2026-03-17 00:00:06.421751 | Job console starting
2026-03-17 00:00:06.447498 | Updating git repos
2026-03-17 00:00:06.495724 | Cloning repos into workspace
2026-03-17 00:00:06.883326 | Restoring repo states
2026-03-17 00:00:06.943838 | Merging changes
2026-03-17 00:00:06.943864 | Checking out repos
2026-03-17 00:00:07.502531 | Preparing playbooks
2026-03-17 00:00:08.774847 | Running Ansible setup
2026-03-17 00:00:16.201199 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-17 00:00:18.803514 |
2026-03-17 00:00:18.803689 | PLAY [Base pre]
2026-03-17 00:00:18.843584 |
2026-03-17 00:00:18.843760 | TASK [Setup log path fact]
2026-03-17 00:00:18.876341 | orchestrator | ok
2026-03-17 00:00:18.919655 |
2026-03-17 00:00:18.919837 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-17 00:00:18.999716 | orchestrator | ok
2026-03-17 00:00:19.032718 |
2026-03-17 00:00:19.032871 | TASK [emit-job-header : Print job information]
2026-03-17 00:00:19.142762 | # Job Information
2026-03-17 00:00:19.143078 | Ansible Version: 2.16.14
2026-03-17 00:00:19.143171 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-17 00:00:19.143222 | Pipeline: periodic-midnight
2026-03-17 00:00:19.143247 | Executor: 521e9411259a
2026-03-17 00:00:19.143268 | Triggered by: https://github.com/osism/testbed
2026-03-17 00:00:19.143290 | Event ID: 7f7c98d488164e9a90f8fe7794c9d4c5
2026-03-17 00:00:19.158923 |
2026-03-17 00:00:19.159069 | LOOP [emit-job-header : Print node information]
2026-03-17 00:00:19.425652 | orchestrator | ok:
2026-03-17 00:00:19.425878 | orchestrator | # Node Information
2026-03-17 00:00:19.425915 | orchestrator | Inventory Hostname: orchestrator
2026-03-17 00:00:19.425940 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-17 00:00:19.425963 | orchestrator | Username: zuul-testbed01
2026-03-17 00:00:19.425984 | orchestrator | Distro: Debian 12.13
2026-03-17 00:00:19.426007 | orchestrator | Provider: static-testbed
2026-03-17 00:00:19.426028 | orchestrator | Region:
2026-03-17 00:00:19.426050 | orchestrator | Label: testbed-orchestrator
2026-03-17 00:00:19.426070 | orchestrator | Product Name: OpenStack Nova
2026-03-17 00:00:19.426089 | orchestrator | Interface IP: 81.163.193.140
2026-03-17 00:00:19.453475 |
2026-03-17 00:00:19.453624 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:21.524285 | orchestrator -> localhost | changed
2026-03-17 00:00:21.538539 |
2026-03-17 00:00:21.538707 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-17 00:00:25.527808 | orchestrator -> localhost | changed
2026-03-17 00:00:25.539100 |
2026-03-17 00:00:25.539190 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-17 00:00:26.268884 | orchestrator -> localhost | ok
2026-03-17 00:00:26.274522 |
2026-03-17 00:00:26.274623 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-17 00:00:26.314328 | orchestrator | ok
2026-03-17 00:00:26.340238 | orchestrator | included: /var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-17 00:00:26.377369 |
2026-03-17 00:00:26.377475 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-17 00:00:32.154245 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-17 00:00:32.155541 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/e0ee52d8e54949f4a7ff2f5852dacab8_id_rsa
2026-03-17 00:00:32.155603 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/e0ee52d8e54949f4a7ff2f5852dacab8_id_rsa.pub
2026-03-17 00:00:32.155628 | orchestrator -> localhost | The key fingerprint is:
2026-03-17 00:00:32.155652 | orchestrator -> localhost | SHA256:rYJtMfQIYf1CIC3bGxdGEafepdz5TwDhpxFFWAWs7i0 zuul-build-sshkey
2026-03-17 00:00:32.155671 | orchestrator -> localhost | The key's randomart image is:
2026-03-17 00:00:32.155697 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-17 00:00:32.155715 | orchestrator -> localhost | | ..+++o. o*=o. |
2026-03-17 00:00:32.155733 | orchestrator -> localhost | | .o..=o ..o. |
2026-03-17 00:00:32.155819 | orchestrator -> localhost | | +.ooo =.. |
2026-03-17 00:00:32.155842 | orchestrator -> localhost | | . o+o=.=.* |
2026-03-17 00:00:32.155860 | orchestrator -> localhost | | +=.S.= . |
2026-03-17 00:00:32.155884 | orchestrator -> localhost | | .o o ... . |
2026-03-17 00:00:32.155902 | orchestrator -> localhost | | . + .. .. . |
2026-03-17 00:00:32.155919 | orchestrator -> localhost | | . . E .o |
2026-03-17 00:00:32.155937 | orchestrator -> localhost | | . . |
2026-03-17 00:00:32.155954 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-17 00:00:32.156002 | orchestrator -> localhost | ok: Runtime: 0:00:04.386821
2026-03-17 00:00:32.162169 |
2026-03-17 00:00:32.162255 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-17 00:00:32.243403 | orchestrator | ok
2026-03-17 00:00:32.266854 | orchestrator | included: /var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-17 00:00:32.290561 |
2026-03-17 00:00:32.290662 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-17 00:00:32.324265 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:32.331775 |
2026-03-17 00:00:32.331879 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-17 00:00:33.011931 | orchestrator | changed
2026-03-17 00:00:33.017592 |
2026-03-17 00:00:33.017679 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-17 00:00:33.307352 | orchestrator | ok
2026-03-17 00:00:33.313968 |
2026-03-17 00:00:33.314055 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-17 00:00:33.842412 | orchestrator | ok
2026-03-17 00:00:33.852710 |
2026-03-17 00:00:33.852806 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-17 00:00:34.314733 | orchestrator | ok
2026-03-17 00:00:34.319703 |
2026-03-17 00:00:34.319781 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-17 00:00:34.364820 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:34.370929 |
2026-03-17 00:00:34.371018 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-17 00:00:35.427359 | orchestrator -> localhost | changed
2026-03-17 00:00:35.441427 |
2026-03-17 00:00:35.441516 | TASK [add-build-sshkey : Add back temp key]
2026-03-17 00:00:36.529994 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/e0ee52d8e54949f4a7ff2f5852dacab8_id_rsa (zuul-build-sshkey)
2026-03-17 00:00:36.530182 | orchestrator -> localhost | ok: Runtime: 0:00:00.034448
2026-03-17 00:00:36.535956 |
2026-03-17 00:00:36.536040 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-17 00:00:37.161043 | orchestrator | ok
2026-03-17 00:00:37.165973 |
2026-03-17 00:00:37.166055 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-17 00:00:37.203709 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:37.360359 |
2026-03-17 00:00:37.360458 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-17 00:00:37.817178 | orchestrator | ok
2026-03-17 00:00:37.826047 |
2026-03-17 00:00:37.826131 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-17 00:00:37.899482 | orchestrator | ok
2026-03-17 00:00:37.906067 |
2026-03-17 00:00:37.906141 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:38.771351 | orchestrator -> localhost | ok
2026-03-17 00:00:38.777250 |
2026-03-17 00:00:38.777341 | TASK [validate-host : Collect information about the host]
2026-03-17 00:00:40.401280 | orchestrator | ok
2026-03-17 00:00:40.432577 |
2026-03-17 00:00:40.432693 | TASK [validate-host : Sanitize hostname]
2026-03-17 00:00:40.536048 | orchestrator | ok
2026-03-17 00:00:40.540365 |
2026-03-17 00:00:40.540451 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-17 00:00:42.434355 | orchestrator -> localhost | changed
2026-03-17 00:00:42.439324 |
2026-03-17 00:00:42.439447 | TASK [validate-host : Collect information about zuul worker]
2026-03-17 00:00:43.194130 | orchestrator | ok
2026-03-17 00:00:43.198537 |
2026-03-17 00:00:43.198620 | TASK [validate-host : Write out all zuul information for each host]
2026-03-17 00:00:44.603531 | orchestrator -> localhost | changed
2026-03-17 00:00:44.612066 |
2026-03-17 00:00:44.612154 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-17 00:00:44.991679 | orchestrator | ok
2026-03-17 00:00:45.001692 |
2026-03-17 00:00:45.001810 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-17 00:02:16.371689 | orchestrator | changed:
2026-03-17 00:02:16.371927 | orchestrator | .d..t...... src/
2026-03-17 00:02:16.371963 | orchestrator | .d..t...... src/github.com/
2026-03-17 00:02:16.371988 | orchestrator | .d..t...... src/github.com/osism/
2026-03-17 00:02:16.372011 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-17 00:02:16.372032 | orchestrator | RedHat.yml
2026-03-17 00:02:16.404078 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-17 00:02:16.404096 | orchestrator | RedHat.yml
2026-03-17 00:02:16.404148 | orchestrator | = 1.53.0"...
2026-03-17 00:02:27.506063 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-17 00:02:27.634247 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-17 00:02:28.093953 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:28.150168 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-17 00:02:29.231767 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-17 00:02:29.288431 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-17 00:02:29.740365 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:29.740475 | orchestrator |
2026-03-17 00:02:29.740483 | orchestrator | Providers are signed by their developers.
2026-03-17 00:02:29.740489 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-17 00:02:29.740494 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-17 00:02:29.740502 | orchestrator |
2026-03-17 00:02:29.740507 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-17 00:02:29.740512 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-17 00:02:29.740522 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-17 00:02:29.740527 | orchestrator | you run "tofu init" in the future.
2026-03-17 00:02:29.740532 | orchestrator |
2026-03-17 00:02:29.740536 | orchestrator | OpenTofu has been successfully initialized!
2026-03-17 00:02:29.740540 | orchestrator |
2026-03-17 00:02:29.740545 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-17 00:02:29.740549 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-17 00:02:29.740554 | orchestrator | should now work.
2026-03-17 00:02:29.740558 | orchestrator |
2026-03-17 00:02:29.740562 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-17 00:02:29.740566 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-17 00:02:29.740571 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-17 00:02:29.897408 | orchestrator | Created and switched to workspace "ci"!
2026-03-17 00:02:29.897467 | orchestrator |
2026-03-17 00:02:29.897475 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-17 00:02:29.897481 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-17 00:02:29.897489 | orchestrator | for this configuration.
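[Editor's aside] The plan that follows declares several `local_file` artifacts (inventory, SSH key material, manager address). The testbed's actual Terraform source is not part of this log, so as a hedged illustration only, one of those resources could be written roughly like this HCL sketch; everything except the `filename`, `file_permission`, and `directory_permission` values shown in the plan is assumed:

```hcl
# Hypothetical reconstruction from the plan output below; the real
# osism/testbed source may differ in everything but these attributes.
resource "local_file" "inventory" {
  content              = templatefile("templates/inventory.tpl", {}) # assumed
  filename             = "inventory.ci"
  file_permission      = "0644"
  directory_permission = "0777"
}
```

Note that `0644`/`0777` are also the provider defaults for `local_file`, so the real configuration may simply omit them.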
2026-03-17 00:02:29.992061 | orchestrator | ci.auto.tfvars
2026-03-17 00:02:30.201513 | orchestrator | default_custom.tf
2026-03-17 00:02:32.220282 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-17 00:02:32.763623 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-17 00:02:33.140674 | orchestrator |
2026-03-17 00:02:33.140744 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-17 00:02:33.140753 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-17 00:02:33.140758 | orchestrator | + create
2026-03-17 00:02:33.140763 | orchestrator | <= read (data resources)
2026-03-17 00:02:33.140767 | orchestrator |
2026-03-17 00:02:33.140772 | orchestrator | OpenTofu will perform the following actions:
2026-03-17 00:02:33.140776 | orchestrator |
2026-03-17 00:02:33.140792 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-17 00:02:33.140796 | orchestrator | # (config refers to values not yet known)
2026-03-17 00:02:33.140800 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-17 00:02:33.140804 | orchestrator | + checksum = (known after apply)
2026-03-17 00:02:33.140808 | orchestrator | + created_at = (known after apply)
2026-03-17 00:02:33.140813 | orchestrator | + file = (known after apply)
2026-03-17 00:02:33.140817 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.140837 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.140841 | orchestrator | + min_disk_gb = (known after apply)
2026-03-17 00:02:33.140845 | orchestrator | + min_ram_mb = (known after apply)
2026-03-17 00:02:33.140849 | orchestrator | + most_recent = true
2026-03-17 00:02:33.140853 | orchestrator | + name = (known after apply)
2026-03-17 00:02:33.140857 | orchestrator | + protected = (known after apply)
2026-03-17 00:02:33.140861 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.140867 | orchestrator | + schema = (known after apply)
2026-03-17 00:02:33.140871 | orchestrator | + size_bytes = (known after apply)
2026-03-17 00:02:33.140875 | orchestrator | + tags = (known after apply)
2026-03-17 00:02:33.140879 | orchestrator | + updated_at = (known after apply)
2026-03-17 00:02:33.140883 | orchestrator | }
2026-03-17 00:02:33.140887 | orchestrator |
2026-03-17 00:02:33.140891 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-17 00:02:33.140895 | orchestrator | # (config refers to values not yet known)
2026-03-17 00:02:33.140899 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-17 00:02:33.140903 | orchestrator | + checksum = (known after apply)
2026-03-17 00:02:33.140906 | orchestrator | + created_at = (known after apply)
2026-03-17 00:02:33.140910 | orchestrator | + file = (known after apply)
2026-03-17 00:02:33.140914 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.140918 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.140922 | orchestrator | + min_disk_gb = (known after apply)
2026-03-17 00:02:33.140925 | orchestrator | + min_ram_mb = (known after apply)
2026-03-17 00:02:33.140929 | orchestrator | + most_recent = true
2026-03-17 00:02:33.140933 | orchestrator | + name = (known after apply)
2026-03-17 00:02:33.140937 | orchestrator | + protected = (known after apply)
2026-03-17 00:02:33.140940 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.140944 | orchestrator | + schema = (known after apply)
2026-03-17 00:02:33.140948 | orchestrator | + size_bytes = (known after apply)
2026-03-17 00:02:33.140952 | orchestrator | + tags = (known after apply)
2026-03-17 00:02:33.140955 | orchestrator | + updated_at = (known after apply)
2026-03-17 00:02:33.140959 | orchestrator | }
2026-03-17 00:02:33.140963 | orchestrator |
2026-03-17 00:02:33.140967 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-17 00:02:33.140971 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-17 00:02:33.140975 | orchestrator | + content = (known after apply)
2026-03-17 00:02:33.140979 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:33.140983 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:33.140987 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:33.140990 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:33.140994 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:33.140998 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:33.141002 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:33.141005 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:33.141009 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-17 00:02:33.141013 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141017 | orchestrator | }
2026-03-17 00:02:33.141020 | orchestrator |
2026-03-17 00:02:33.141024 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-17 00:02:33.141028 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-17 00:02:33.141032 | orchestrator | + content = (known after apply)
2026-03-17 00:02:33.141035 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:33.141039 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:33.141043 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:33.141047 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:33.141051 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:33.141054 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:33.141058 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:33.141062 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:33.141069 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-17 00:02:33.141073 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141077 | orchestrator | }
2026-03-17 00:02:33.141081 | orchestrator |
2026-03-17 00:02:33.141089 | orchestrator | # local_file.inventory will be created
2026-03-17 00:02:33.141093 | orchestrator | + resource "local_file" "inventory" {
2026-03-17 00:02:33.141097 | orchestrator | + content = (known after apply)
2026-03-17 00:02:33.141100 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:33.141104 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:33.141108 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:33.141112 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:33.141116 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:33.141119 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:33.141123 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:33.141127 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:33.141131 | orchestrator | + filename = "inventory.ci"
2026-03-17 00:02:33.141134 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141138 | orchestrator | }
2026-03-17 00:02:33.141142 | orchestrator |
2026-03-17 00:02:33.141146 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-17 00:02:33.141149 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-17 00:02:33.141153 | orchestrator | + content = (sensitive value)
2026-03-17 00:02:33.141157 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:33.141161 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:33.141164 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:33.141168 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:33.141172 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:33.141185 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:33.141189 | orchestrator | + directory_permission = "0700"
2026-03-17 00:02:33.141193 | orchestrator | + file_permission = "0600"
2026-03-17 00:02:33.141196 | orchestrator | + filename = ".id_rsa.ci"
2026-03-17 00:02:33.141200 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141204 | orchestrator | }
2026-03-17 00:02:33.141208 | orchestrator |
2026-03-17 00:02:33.141211 | orchestrator | # null_resource.node_semaphore will be created
2026-03-17 00:02:33.141215 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-17 00:02:33.141219 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141223 | orchestrator | }
2026-03-17 00:02:33.141226 | orchestrator |
2026-03-17 00:02:33.141234 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-17 00:02:33.141238 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-17 00:02:33.141242 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141245 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141249 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141253 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141257 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141260 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-17 00:02:33.141264 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141268 | orchestrator | + size = 80
2026-03-17 00:02:33.141272 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141276 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141279 | orchestrator | }
2026-03-17 00:02:33.141283 | orchestrator |
2026-03-17 00:02:33.141287 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-17 00:02:33.141291 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141294 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141298 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141302 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141309 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141313 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141317 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-17 00:02:33.141320 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141324 | orchestrator | + size = 80
2026-03-17 00:02:33.141328 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141332 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141335 | orchestrator | }
2026-03-17 00:02:33.141339 | orchestrator |
2026-03-17 00:02:33.141343 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-17 00:02:33.141347 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141350 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141354 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141358 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141362 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141365 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141369 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-17 00:02:33.141373 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141376 | orchestrator | + size = 80
2026-03-17 00:02:33.141380 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141384 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141388 | orchestrator | }
2026-03-17 00:02:33.141392 | orchestrator |
2026-03-17 00:02:33.141395 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-17 00:02:33.141399 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141403 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141407 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141410 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141414 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141418 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141422 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-17 00:02:33.141425 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141429 | orchestrator | + size = 80
2026-03-17 00:02:33.141433 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141436 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141440 | orchestrator | }
2026-03-17 00:02:33.141444 | orchestrator |
2026-03-17 00:02:33.141448 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-17 00:02:33.141452 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141455 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141459 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141463 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141467 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141470 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141476 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-17 00:02:33.141480 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141484 | orchestrator | + size = 80
2026-03-17 00:02:33.141488 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141491 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141495 | orchestrator | }
2026-03-17 00:02:33.141499 | orchestrator |
2026-03-17 00:02:33.141503 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-17 00:02:33.141506 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141510 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141514 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141518 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141524 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141528 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141532 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-17 00:02:33.141535 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141539 | orchestrator | + size = 80
2026-03-17 00:02:33.141543 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141547 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141550 | orchestrator | }
2026-03-17 00:02:33.141554 | orchestrator |
2026-03-17 00:02:33.141558 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-17 00:02:33.141564 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:33.141568 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141572 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141575 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141579 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:33.141583 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141587 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-17 00:02:33.141590 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141594 | orchestrator | + size = 80
2026-03-17 00:02:33.141598 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141602 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141605 | orchestrator | }
2026-03-17 00:02:33.141609 | orchestrator |
2026-03-17 00:02:33.141613 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-17 00:02:33.141617 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141620 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141624 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141628 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141632 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141635 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-17 00:02:33.141639 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141643 | orchestrator | + size = 20
2026-03-17 00:02:33.141647 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141650 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141654 | orchestrator | }
2026-03-17 00:02:33.141658 | orchestrator |
2026-03-17 00:02:33.141662 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-17 00:02:33.141665 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141669 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141673 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141676 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141680 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141684 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-17 00:02:33.141688 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141691 | orchestrator | + size = 20
2026-03-17 00:02:33.141695 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141699 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141703 | orchestrator | }
2026-03-17 00:02:33.141706 | orchestrator |
2026-03-17 00:02:33.141710 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-17 00:02:33.141714 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141718 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141721 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141725 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141729 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141732 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-17 00:02:33.141736 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141742 | orchestrator | + size = 20
2026-03-17 00:02:33.141746 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141750 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141754 | orchestrator | }
2026-03-17 00:02:33.141758 | orchestrator |
2026-03-17 00:02:33.141761 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-17 00:02:33.141765 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141769 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141772 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141776 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141793 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141796 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-17 00:02:33.141800 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141804 | orchestrator | + size = 20
2026-03-17 00:02:33.141808 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141812 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141815 | orchestrator | }
2026-03-17 00:02:33.141819 | orchestrator |
2026-03-17 00:02:33.141823 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-17 00:02:33.141827 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141830 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141834 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141838 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141842 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141846 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-17 00:02:33.141849 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141856 | orchestrator | + size = 20
2026-03-17 00:02:33.141860 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141863 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141867 | orchestrator | }
2026-03-17 00:02:33.141871 | orchestrator |
2026-03-17 00:02:33.141875 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-17 00:02:33.141879 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141882 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141886 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141890 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141894 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141897 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-17 00:02:33.141901 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141905 | orchestrator | + size = 20
2026-03-17 00:02:33.141909 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141912 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141916 | orchestrator | }
2026-03-17 00:02:33.141920 | orchestrator |
2026-03-17 00:02:33.141923 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-17 00:02:33.141927 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141931 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141935 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141938 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141946 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.141950 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-17 00:02:33.141954 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.141958 | orchestrator | + size = 20
2026-03-17 00:02:33.141962 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.141965 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.141969 | orchestrator | }
2026-03-17 00:02:33.141973 | orchestrator |
2026-03-17 00:02:33.141977 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-17 00:02:33.141980 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:33.141987 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:33.141991 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:33.141995 | orchestrator | + id = (known after apply)
2026-03-17 00:02:33.141998 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:33.142002 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-17 00:02:33.142006 | orchestrator | + region = (known after apply)
2026-03-17 00:02:33.142010 | orchestrator | + size = 20
2026-03-17 00:02:33.142028 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:33.142033 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:33.142040 | orchestrator | }
2026-03-17 00:02:33.142044 | orchestrator |
2026-03-17 00:02:33.142048 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-17 00:02:33.142051 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:33.142055 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:33.142059 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.142063 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.142066 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:33.142070 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-17 00:02:33.142074 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.142078 | orchestrator | + size = 20 2026-03-17 00:02:33.142081 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:33.142085 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:33.142089 | orchestrator | } 2026-03-17 00:02:33.156156 | orchestrator | 2026-03-17 00:02:33.156218 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-17 00:02:33.156223 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-17 00:02:33.156228 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.156233 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.156237 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.156241 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.156245 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.156249 | orchestrator | + config_drive = true 2026-03-17 00:02:33.156253 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.156257 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.156261 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-17 00:02:33.156264 | orchestrator | + force_delete = false 2026-03-17 00:02:33.156268 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.156272 | 
orchestrator | + id = (known after apply) 2026-03-17 00:02:33.156276 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.156279 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.156283 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.156287 | orchestrator | + name = "testbed-manager" 2026-03-17 00:02:33.156291 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.156295 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.156299 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.156302 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.156306 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.156310 | orchestrator | + user_data = (sensitive value) 2026-03-17 00:02:33.156314 | orchestrator | 2026-03-17 00:02:33.156318 | orchestrator | + block_device { 2026-03-17 00:02:33.156322 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.156326 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.156339 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.156343 | orchestrator | + multiattach = false 2026-03-17 00:02:33.156347 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.156351 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156366 | orchestrator | } 2026-03-17 00:02:33.156371 | orchestrator | 2026-03-17 00:02:33.156375 | orchestrator | + network { 2026-03-17 00:02:33.156379 | orchestrator | + access_network = false 2026-03-17 00:02:33.156382 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.156386 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.156390 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.156394 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.156398 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.156402 | orchestrator | + uuid = (known after apply) 2026-03-17 
00:02:33.156405 | orchestrator | } 2026-03-17 00:02:33.156410 | orchestrator | } 2026-03-17 00:02:33.156413 | orchestrator | 2026-03-17 00:02:33.156417 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-17 00:02:33.156421 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.156425 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.156429 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.156432 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.156436 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.156440 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.156444 | orchestrator | + config_drive = true 2026-03-17 00:02:33.156448 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.156451 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.156455 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.156459 | orchestrator | + force_delete = false 2026-03-17 00:02:33.156463 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.156467 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.156471 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.156474 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.156478 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.156482 | orchestrator | + name = "testbed-node-0" 2026-03-17 00:02:33.156486 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.156490 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.156493 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.156497 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.156501 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.156505 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.156509 | orchestrator | 2026-03-17 00:02:33.156513 | orchestrator | + block_device { 2026-03-17 00:02:33.156517 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.156521 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.156525 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.156528 | orchestrator | + multiattach = false 2026-03-17 00:02:33.156532 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.156536 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156540 | orchestrator | } 2026-03-17 00:02:33.156544 | orchestrator | 2026-03-17 00:02:33.156548 | orchestrator | + network { 2026-03-17 00:02:33.156551 | orchestrator | + access_network = false 2026-03-17 00:02:33.156555 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.156559 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.156563 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.156567 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.156571 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.156574 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156578 | orchestrator | } 2026-03-17 00:02:33.156582 | orchestrator | } 2026-03-17 00:02:33.156586 | orchestrator | 2026-03-17 00:02:33.156590 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-17 00:02:33.156594 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.156597 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.156608 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.156611 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.156615 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.156619 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.156623 
| orchestrator | + config_drive = true 2026-03-17 00:02:33.156627 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.156641 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.156645 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.156649 | orchestrator | + force_delete = false 2026-03-17 00:02:33.156653 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.156657 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.156661 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.156664 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.156668 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.156672 | orchestrator | + name = "testbed-node-1" 2026-03-17 00:02:33.156676 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.156680 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.156684 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.156687 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.156691 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.156695 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.156699 | orchestrator | 2026-03-17 00:02:33.156703 | orchestrator | + block_device { 2026-03-17 00:02:33.156707 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.156710 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.156714 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.156718 | orchestrator | + multiattach = false 2026-03-17 00:02:33.156722 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.156726 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156729 | orchestrator | } 2026-03-17 00:02:33.156733 | orchestrator | 2026-03-17 00:02:33.156737 | orchestrator | + network { 2026-03-17 00:02:33.156741 | orchestrator | + access_network = 
false 2026-03-17 00:02:33.156744 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.156748 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.156752 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.156756 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.156760 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.156763 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156767 | orchestrator | } 2026-03-17 00:02:33.156771 | orchestrator | } 2026-03-17 00:02:33.156775 | orchestrator | 2026-03-17 00:02:33.156803 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-17 00:02:33.156807 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.156811 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.156815 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.156821 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.156825 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.156831 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.156835 | orchestrator | + config_drive = true 2026-03-17 00:02:33.156839 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.156843 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.156847 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.156850 | orchestrator | + force_delete = false 2026-03-17 00:02:33.156854 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.156858 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.156862 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.156869 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.156873 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.156877 | orchestrator | + name = 
"testbed-node-2" 2026-03-17 00:02:33.156881 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.156884 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.156888 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.156892 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.156896 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.156900 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.156903 | orchestrator | 2026-03-17 00:02:33.156907 | orchestrator | + block_device { 2026-03-17 00:02:33.156911 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.156914 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.156918 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.156922 | orchestrator | + multiattach = false 2026-03-17 00:02:33.156926 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.156930 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156933 | orchestrator | } 2026-03-17 00:02:33.156937 | orchestrator | 2026-03-17 00:02:33.156941 | orchestrator | + network { 2026-03-17 00:02:33.156945 | orchestrator | + access_network = false 2026-03-17 00:02:33.156948 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.156952 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.156956 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.156960 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.156963 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.156967 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.156971 | orchestrator | } 2026-03-17 00:02:33.156975 | orchestrator | } 2026-03-17 00:02:33.156978 | orchestrator | 2026-03-17 00:02:33.156982 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-17 00:02:33.156986 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.156990 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.156994 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.156997 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.157001 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.157005 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.157008 | orchestrator | + config_drive = true 2026-03-17 00:02:33.157012 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.157016 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.157020 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.157023 | orchestrator | + force_delete = false 2026-03-17 00:02:33.157027 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.157031 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157035 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.157039 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.157042 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.157046 | orchestrator | + name = "testbed-node-3" 2026-03-17 00:02:33.157053 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.157057 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157061 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.157064 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.157068 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.157072 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.157076 | orchestrator | 2026-03-17 00:02:33.157080 | orchestrator | + block_device { 2026-03-17 00:02:33.157086 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.157089 | orchestrator | + delete_on_termination = false 2026-03-17 
00:02:33.157093 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.157100 | orchestrator | + multiattach = false 2026-03-17 00:02:33.157104 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.157108 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157111 | orchestrator | } 2026-03-17 00:02:33.157115 | orchestrator | 2026-03-17 00:02:33.157119 | orchestrator | + network { 2026-03-17 00:02:33.157123 | orchestrator | + access_network = false 2026-03-17 00:02:33.157127 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.157130 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.157134 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.157138 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.157142 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.157145 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157149 | orchestrator | } 2026-03-17 00:02:33.157153 | orchestrator | } 2026-03-17 00:02:33.157157 | orchestrator | 2026-03-17 00:02:33.157160 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-17 00:02:33.157164 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.157168 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.157172 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.157176 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.157179 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.157183 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.157187 | orchestrator | + config_drive = true 2026-03-17 00:02:33.157191 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.157194 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.157198 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.157202 | 
orchestrator | + force_delete = false 2026-03-17 00:02:33.157206 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.157209 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157213 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.157217 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.157221 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.157224 | orchestrator | + name = "testbed-node-4" 2026-03-17 00:02:33.157228 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.157232 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157236 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.157239 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:33.157243 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.157247 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.157251 | orchestrator | 2026-03-17 00:02:33.157254 | orchestrator | + block_device { 2026-03-17 00:02:33.157258 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.157262 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.157266 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.157269 | orchestrator | + multiattach = false 2026-03-17 00:02:33.157273 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.157277 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157281 | orchestrator | } 2026-03-17 00:02:33.157284 | orchestrator | 2026-03-17 00:02:33.157288 | orchestrator | + network { 2026-03-17 00:02:33.157292 | orchestrator | + access_network = false 2026-03-17 00:02:33.157296 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.157299 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.157303 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.157307 | orchestrator | + name = (known 
after apply) 2026-03-17 00:02:33.157311 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.157314 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157318 | orchestrator | } 2026-03-17 00:02:33.157322 | orchestrator | } 2026-03-17 00:02:33.157329 | orchestrator | 2026-03-17 00:02:33.157333 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-17 00:02:33.157337 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:33.157341 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:33.157344 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:33.157348 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:33.157352 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:33.157356 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:33.157359 | orchestrator | + config_drive = true 2026-03-17 00:02:33.157363 | orchestrator | + created = (known after apply) 2026-03-17 00:02:33.157367 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:33.157371 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:33.157374 | orchestrator | + force_delete = false 2026-03-17 00:02:33.157380 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:33.157384 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157388 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:33.157392 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:33.157395 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:33.157399 | orchestrator | + name = "testbed-node-5" 2026-03-17 00:02:33.157403 | orchestrator | + power_state = "active" 2026-03-17 00:02:33.157407 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157410 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:33.157414 | orchestrator | + 
stop_before_destroy = false 2026-03-17 00:02:33.157418 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:33.157422 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:33.157425 | orchestrator | 2026-03-17 00:02:33.157429 | orchestrator | + block_device { 2026-03-17 00:02:33.157433 | orchestrator | + boot_index = 0 2026-03-17 00:02:33.157437 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:33.157440 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:33.157446 | orchestrator | + multiattach = false 2026-03-17 00:02:33.157450 | orchestrator | + source_type = "volume" 2026-03-17 00:02:33.157454 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157458 | orchestrator | } 2026-03-17 00:02:33.157462 | orchestrator | 2026-03-17 00:02:33.157465 | orchestrator | + network { 2026-03-17 00:02:33.157469 | orchestrator | + access_network = false 2026-03-17 00:02:33.157473 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:33.157477 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:33.157480 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:33.157484 | orchestrator | + name = (known after apply) 2026-03-17 00:02:33.157488 | orchestrator | + port = (known after apply) 2026-03-17 00:02:33.157492 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:33.157496 | orchestrator | } 2026-03-17 00:02:33.157500 | orchestrator | } 2026-03-17 00:02:33.157503 | orchestrator | 2026-03-17 00:02:33.157507 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-17 00:02:33.157511 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-17 00:02:33.157515 | orchestrator | + fingerprint = (known after apply) 2026-03-17 00:02:33.157518 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157522 | orchestrator | + name = "testbed" 2026-03-17 00:02:33.157526 | orchestrator | + private_key = 
(sensitive value) 2026-03-17 00:02:33.157530 | orchestrator | + public_key = (known after apply) 2026-03-17 00:02:33.157533 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157537 | orchestrator | + user_id = (known after apply) 2026-03-17 00:02:33.157541 | orchestrator | } 2026-03-17 00:02:33.157545 | orchestrator | 2026-03-17 00:02:33.157549 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-17 00:02:33.157552 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:33.157559 | orchestrator | + device = (known after apply) 2026-03-17 00:02:33.157563 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157567 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:33.157571 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157574 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:33.157578 | orchestrator | } 2026-03-17 00:02:33.157582 | orchestrator | 2026-03-17 00:02:33.157586 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-17 00:02:33.157590 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:33.157593 | orchestrator | + device = (known after apply) 2026-03-17 00:02:33.157597 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.157601 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:33.157604 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.157608 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:33.157612 | orchestrator | } 2026-03-17 00:02:33.157616 | orchestrator | 2026-03-17 00:02:33.157619 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-17 00:02:33.157623 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-17 00:02:33.157627 | orchestrator |
  {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-17 00:02:33.160594 | orchestrator | + network_id = (known after apply) 2026-03-17 00:02:33.160598 | orchestrator | + no_gateway = false 2026-03-17 00:02:33.160602 | orchestrator | + region = (known after apply) 2026-03-17 00:02:33.160606 | orchestrator | + service_types = (known after apply) 2026-03-17 00:02:33.160614 | orchestrator | + tenant_id = (known after apply) 2026-03-17 00:02:33.160618 | orchestrator | 2026-03-17 00:02:33.160622 | orchestrator | + allocation_pool { 2026-03-17 00:02:33.160626 | orchestrator | + end = "192.168.31.250" 2026-03-17 00:02:33.160629 | orchestrator | + start = "192.168.31.200" 2026-03-17 00:02:33.160633 | orchestrator | } 2026-03-17 00:02:33.160637 | orchestrator | } 2026-03-17 00:02:33.160641 | orchestrator | 2026-03-17 00:02:33.160645 | orchestrator | # terraform_data.image will be created 2026-03-17 00:02:33.160649 | orchestrator | + resource "terraform_data" "image" { 2026-03-17 00:02:33.160653 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.160657 | orchestrator | + input = "Ubuntu 24.04" 2026-03-17 00:02:33.160660 | orchestrator | + output = (known after apply) 2026-03-17 00:02:33.160664 | orchestrator | } 2026-03-17 00:02:33.160668 | orchestrator | 2026-03-17 00:02:33.160672 | orchestrator | # terraform_data.image_node will be created 2026-03-17 00:02:33.160676 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-17 00:02:33.160680 | orchestrator | + id = (known after apply) 2026-03-17 00:02:33.160683 | orchestrator | + input = "Ubuntu 24.04" 2026-03-17 00:02:33.160687 | orchestrator | + output = (known after apply) 2026-03-17 00:02:33.160691 | orchestrator | } 2026-03-17 00:02:33.160695 | orchestrator | 2026-03-17 00:02:33.160699 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
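The subnet block in the plan above pairs a /20 CIDR (`192.168.16.0/20`) with an allocation pool of `192.168.31.200`–`192.168.31.250`. A quick sanity check that the pool really falls inside the CIDR can be done with Python's standard `ipaddress` module; this is only an illustration using the values copied from the plan output, not part of the job itself:

```python
import ipaddress

# Values taken from the subnet_management plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")   # covers .16.0 - .31.255
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool endpoints must lie inside the subnet, and start <= end.
assert pool_start in cidr and pool_end in cidr
assert pool_start <= pool_end
print(f"pool {pool_start}-{pool_end} is inside {cidr}")
```

Since a /20 spans 192.168.16.0 through 192.168.31.255, the pool at the top of the range is valid, which matches Neutron accepting the subnet without error later in the log.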
2026-03-17 00:02:33.160703 | orchestrator | 2026-03-17 00:02:33.160706 | orchestrator | Changes to Outputs: 2026-03-17 00:02:33.160711 | orchestrator | + manager_address = (sensitive value) 2026-03-17 00:02:33.160715 | orchestrator | + private_key = (sensitive value) 2026-03-17 00:02:34.702087 | orchestrator | terraform_data.image_node: Creating... 2026-03-17 00:02:34.702357 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=bd842788-1559-6bd0-4010-2c8b349b789a] 2026-03-17 00:02:34.706983 | orchestrator | terraform_data.image: Creating... 2026-03-17 00:02:34.710079 | orchestrator | terraform_data.image: Creation complete after 0s [id=5a6caed8-864f-1897-cae8-81d2f6f600d5] 2026-03-17 00:02:34.725244 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-17 00:02:34.731993 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-17 00:02:34.732046 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-17 00:02:34.745458 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-17 00:02:34.745520 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-17 00:02:34.745533 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-17 00:02:34.756157 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-17 00:02:34.756213 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-17 00:02:34.757678 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-17 00:02:34.765868 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-17 00:02:35.174335 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-17 00:02:35.180523 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
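From here on, the apply phase emits one "Creation complete after Ns [id=...]" record per resource. When post-processing logs like this one, a small regex is enough to pull out the resource address, elapsed time, and assigned ID; a minimal sketch (the pattern is written against the line format visible above, not any Terraform API):

```python
import re

# Matches terraform's per-resource completion records as they appear in
# the log above, e.g.:
#   "terraform_data.image: Creation complete after 0s [id=...]"
PATTERN = re.compile(
    r"(?P<resource>\S+): Creation complete after (?P<elapsed>\S+) \[id=(?P<id>[^\]]+)\]"
)

def parse_completion(line: str):
    """Return (resource, elapsed, id), or None if the line is not a completion record."""
    m = PATTERN.search(line)
    return (m["resource"], m["elapsed"], m["id"]) if m else None

sample = ("openstack_compute_keypair_v2.key: "
          "Creation complete after 0s [id=testbed]")
print(parse_completion(sample))
# -> ('openstack_compute_keypair_v2.key', '0s', 'testbed')
```

"Still creating..." progress records do not match the pattern and yield `None`, so the parser can be run over the whole stream.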
2026-03-17 00:02:35.187541 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-17 00:02:35.200354 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-17 00:02:35.217604 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-03-17 00:02:35.221277 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-17 00:02:35.723568 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=659b959c-5607-40ec-a83f-a2f2e5b0e8fc] 2026-03-17 00:02:35.733674 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-17 00:02:38.344624 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=f65971dd-3d8e-4ccb-8892-9cef1457b08b] 2026-03-17 00:02:38.349181 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-17 00:02:38.354482 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=89f9da0d-6b93-4417-9f39-e48f14dc47e8] 2026-03-17 00:02:38.360325 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-17 00:02:38.366007 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=304f2e06-033e-4696-8bcf-5d7e9425b0ee] 2026-03-17 00:02:38.373370 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-17 00:02:38.381722 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=fe0d5661-edac-468e-9d1d-014c3e419a65] 2026-03-17 00:02:38.383834 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=d33e80f7-c5e3-468e-989c-76b1c28adee9] 2026-03-17 00:02:38.391711 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-03-17 00:02:38.392422 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-17 00:02:38.394614 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=8140ca94-7747-4c81-b89b-0d83b2f23451] 2026-03-17 00:02:38.398700 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-17 00:02:38.446190 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=a8e3ed1c-2f99-41d3-ad10-61535a4cd08c] 2026-03-17 00:02:38.446298 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=1afbae95-f964-4c90-9c71-9e7629ff9c63] 2026-03-17 00:02:38.465075 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-17 00:02:38.467982 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-17 00:02:38.468089 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=83f9c1ee-a593-4773-9f19-cdbbc5179b15] 2026-03-17 00:02:38.476775 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-17 00:02:38.478431 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=1dff0caec8f1785d79d0979e7ae0d8a7391c1a89] 2026-03-17 00:02:38.479864 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=fa834595eb5e0fa2b78fe39996ef507eae29967b] 2026-03-17 00:02:39.077374 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2ad57a79-f08d-4fb4-9f95-65bfce46ba5e] 2026-03-17 00:02:39.420937 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=2ac83161-f87a-45d2-844b-673207aa8eac] 2026-03-17 00:02:39.988579 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-03-17 00:02:41.789314 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=6e08813e-a36b-44d4-8c45-37d944b877b3] 2026-03-17 00:02:41.795223 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a] 2026-03-17 00:02:41.820673 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=180a1b57-eb5e-4d25-a5e5-abe1a8d1b622] 2026-03-17 00:02:41.833606 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=fef12aab-9308-4371-8ae3-fd48e064f393] 2026-03-17 00:02:41.836523 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4] 2026-03-17 00:02:41.864749 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=dfd38aa9-0273-4d0b-842d-83e2b920901d] 2026-03-17 00:02:42.829057 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=779fef59-9662-45d9-aca3-b3b1260ce3f3] 2026-03-17 00:02:42.834481 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-17 00:02:42.836069 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-17 00:02:42.840079 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-17 00:02:43.038148 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=74ab20ec-3348-47b6-81be-479701867fc7] 2026-03-17 00:02:43.049159 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-17 00:02:43.051418 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
2026-03-17 00:02:43.052195 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-17 00:02:43.052500 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-17 00:02:43.055156 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-17 00:02:43.055704 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-17 00:02:43.065802 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-17 00:02:43.072776 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-17 00:02:43.248758 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f80f4bbb-8865-43e8-8b35-3dfb3b19cba9] 2026-03-17 00:02:43.264055 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-17 00:02:43.688824 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=05089f8f-8400-432c-9943-4263fb19966d] 2026-03-17 00:02:43.706085 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-17 00:02:43.875302 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=0dae88d0-efd4-416a-8db4-3fb34fdfd14a] 2026-03-17 00:02:43.878573 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=b84ac277-0fac-4abd-96e6-f5dca955cd90] 2026-03-17 00:02:43.883737 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-17 00:02:43.885462 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-03-17 00:02:44.110827 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=7407fd68-09c2-4e3e-a8e7-a63674ada115] 2026-03-17 00:02:44.116996 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-17 00:02:44.314913 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=14300924-de11-42b9-9da1-c4e0861c0ac3] 2026-03-17 00:02:44.321507 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-17 00:02:44.371319 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d784f2f6-a49f-480a-9fae-ad34d5213cb3] 2026-03-17 00:02:44.378184 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-17 00:02:44.379714 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=c5d2a8e4-38d8-4f96-9500-cccd965026dd] 2026-03-17 00:02:44.396439 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 
2026-03-17 00:02:44.492499 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ce891b01-4e51-4f46-9fb8-2c17a3ed1818] 2026-03-17 00:02:44.547597 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=dad5b81f-a441-454a-a744-5258f6a01e9f] 2026-03-17 00:02:44.742668 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=ad6fbf03-2a31-4b11-aba0-48e6a367cb43] 2026-03-17 00:02:44.867278 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=163fb103-707a-4306-ae8b-2cd495c3ccae] 2026-03-17 00:02:44.874283 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=03c704a3-1236-4c63-a115-2e94a135bba0] 2026-03-17 00:02:45.014907 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=18f92b4f-094a-4e6c-8098-38ce5e7a3bed] 2026-03-17 00:02:45.259011 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=e96c773b-ab56-4bd8-ba9c-26dfd751264a] 2026-03-17 00:02:45.582870 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=a3cd0b82-c6d8-4580-a943-226cb7eadde0] 2026-03-17 00:02:45.966186 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=28819f32-7950-4a9c-9dbc-df1559a06acc] 2026-03-17 00:02:46.480300 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=9d7af7f3-d109-44aa-89b6-e4f0eaea2f79] 2026-03-17 00:02:46.510222 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-17 00:02:46.511382 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 
2026-03-17 00:02:46.514167 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-17 00:02:46.515496 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-17 00:02:46.525520 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-17 00:02:46.527969 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-17 00:02:46.530322 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-17 00:02:49.204958 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=e128d8a2-6409-4691-bfff-2ebd75478e33] 2026-03-17 00:02:49.212026 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-17 00:02:49.218237 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-17 00:02:49.219993 | orchestrator | local_file.inventory: Creating... 2026-03-17 00:02:49.224068 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ec8aa61ce322dc66f756d2d293fc82fa58f2eede] 2026-03-17 00:02:49.225093 | orchestrator | local_file.inventory: Creation complete after 0s [id=d892644010633af981e58b23e0f5b90af4d031d2] 2026-03-17 00:02:51.217734 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=e128d8a2-6409-4691-bfff-2ebd75478e33] 2026-03-17 00:02:56.515982 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-17 00:02:56.516105 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-17 00:02:56.517127 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-17 00:02:56.528495 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2026-03-17 00:02:56.533823 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-17 00:02:56.533882 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-17 00:03:06.524195 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-17 00:03:06.524330 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-17 00:03:06.524344 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-17 00:03:06.529599 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-17 00:03:06.534866 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-17 00:03:06.534947 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-17 00:03:16.532132 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-17 00:03:16.532266 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-17 00:03:16.532286 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-03-17 00:03:16.532314 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-17 00:03:16.535444 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-17 00:03:16.535479 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-17 00:03:17.694142 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=b548342f-e05d-4f02-8153-17ec6a03d85e] 2026-03-17 00:03:26.539822 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[40s elapsed] 2026-03-17 00:03:26.539920 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-03-17 00:03:26.539927 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-17 00:03:26.539934 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-17 00:03:26.539961 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-03-17 00:03:27.902359 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=806049a9-b483-4de3-86e6-4cce23e250c6] 2026-03-17 00:03:36.543279 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed] 2026-03-17 00:03:36.544348 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-03-17 00:03:36.544394 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed] 2026-03-17 00:03:36.544402 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-03-17 00:03:37.724879 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=af1a6725-ebca-42a3-b1bb-9807e03b2ce2] 2026-03-17 00:03:46.550373 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed] 2026-03-17 00:03:46.550474 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed] 2026-03-17 00:03:46.550482 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[1m0s elapsed] 2026-03-17 00:03:47.529601 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=8c66ce6a-9c57-4187-82f1-6a6d3768bd68] 2026-03-17 00:03:48.308350 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m1s [id=5169845e-04f7-4b79-ac75-26642f891cc4] 2026-03-17 00:03:56.558173 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed] 2026-03-17 00:03:58.959824 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m12s [id=f4f72c92-0379-4d3a-810f-8c14adfa6ce8] 2026-03-17 00:03:58.999735 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-17 00:03:59.011918 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=272072087860849277] 2026-03-17 00:03:59.013027 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-17 00:03:59.020689 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-17 00:03:59.035094 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-17 00:03:59.050384 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-17 00:03:59.059224 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-17 00:03:59.062581 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-17 00:03:59.071291 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-17 00:03:59.076512 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-17 00:03:59.083238 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-17 00:03:59.118966 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
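The progress records above report elapsed times in Terraform's compact form, mixing plain seconds ("50s") with minute-prefixed values ("1m0s", "1m12s"). To compare or sum them, they first need normalizing to seconds; a small hypothetical helper (it only handles the `[Nm]Ns` shapes seen in this log, not hour-scale values):

```python
import re

def elapsed_to_seconds(text: str) -> int:
    """Convert terraform elapsed strings like '50s' or '1m12s' to seconds."""
    m = re.fullmatch(r"(?:(\d+)m)?(\d+)s", text)
    if m is None:
        raise ValueError(f"unrecognized elapsed string: {text!r}")
    minutes = int(m.group(1) or 0)
    return minutes * 60 + int(m.group(2))

# Values as they appear in the instance-creation records above.
print(elapsed_to_seconds("50s"))    # -> 50
print(elapsed_to_seconds("1m12s"))  # -> 72
```

With this, node_server[5]'s 1m12s creation is directly comparable to node_server[4]'s 31s, the fastest of the batch.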
2026-03-17 00:04:02.566435 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=5169845e-04f7-4b79-ac75-26642f891cc4/1afbae95-f964-4c90-9c71-9e7629ff9c63] 2026-03-17 00:04:02.575070 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=b548342f-e05d-4f02-8153-17ec6a03d85e/89f9da0d-6b93-4417-9f39-e48f14dc47e8] 2026-03-17 00:04:02.608036 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=f4f72c92-0379-4d3a-810f-8c14adfa6ce8/fe0d5661-edac-468e-9d1d-014c3e419a65] 2026-03-17 00:04:02.610575 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=5169845e-04f7-4b79-ac75-26642f891cc4/8140ca94-7747-4c81-b89b-0d83b2f23451] 2026-03-17 00:04:02.641692 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=f4f72c92-0379-4d3a-810f-8c14adfa6ce8/d33e80f7-c5e3-468e-989c-76b1c28adee9] 2026-03-17 00:04:08.686639 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=b548342f-e05d-4f02-8153-17ec6a03d85e/a8e3ed1c-2f99-41d3-ad10-61535a4cd08c] 2026-03-17 00:04:08.711970 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=5169845e-04f7-4b79-ac75-26642f891cc4/f65971dd-3d8e-4ccb-8892-9cef1457b08b] 2026-03-17 00:04:08.730473 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=b548342f-e05d-4f02-8153-17ec6a03d85e/83f9c1ee-a593-4773-9f19-cdbbc5179b15] 2026-03-17 00:04:08.752994 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=f4f72c92-0379-4d3a-810f-8c14adfa6ce8/304f2e06-033e-4696-8bcf-5d7e9425b0ee] 2026-03-17 00:04:09.126496 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-17 00:04:19.126749 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-17 00:04:19.563660 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=a2ea0a17-7ad1-444b-878b-4b299a319a32] 2026-03-17 00:04:19.827158 | orchestrator | 2026-03-17 00:04:19.827199 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-17 00:04:19.827228 | orchestrator | 2026-03-17 00:04:19.827238 | orchestrator | Outputs: 2026-03-17 00:04:19.827246 | orchestrator | 2026-03-17 00:04:19.827270 | orchestrator | manager_address = 2026-03-17 00:04:19.827277 | orchestrator | private_key = 2026-03-17 00:04:20.072524 | orchestrator | ok: Runtime: 0:01:52.642638 2026-03-17 00:04:20.104258 | 2026-03-17 00:04:20.104385 | TASK [Create infrastructure (stable)] 2026-03-17 00:04:20.633883 | orchestrator | skipping: Conditional result was False 2026-03-17 00:04:20.652498 | 2026-03-17 00:04:20.652669 | TASK [Fetch manager address] 2026-03-17 00:04:21.088177 | orchestrator | ok 2026-03-17 00:04:21.097502 | 2026-03-17 00:04:21.097632 | TASK [Set manager_host address] 2026-03-17 00:04:21.176565 | orchestrator | ok 2026-03-17 00:04:21.186527 | 2026-03-17 00:04:21.186675 | LOOP [Update ansible collections] 2026-03-17 00:04:22.193500 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-17 00:04:22.193864 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:04:22.193922 | orchestrator | Starting galaxy collection install process 2026-03-17 00:04:22.193970 | orchestrator | Process install dependency map 2026-03-17 00:04:22.194004 | orchestrator | Starting collection install process 2026-03-17 00:04:22.194034 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2026-03-17 00:04:22.194068 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2026-03-17 00:04:22.194175 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-17 00:04:22.194261 | orchestrator | ok: Item: commons Runtime: 0:00:00.572980 2026-03-17 00:04:23.024368 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:04:23.024611 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-17 00:04:23.024675 | orchestrator | Starting galaxy collection install process 2026-03-17 00:04:23.024719 | orchestrator | Process install dependency map 2026-03-17 00:04:23.024757 | orchestrator | Starting collection install process 2026-03-17 00:04:23.024793 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2026-03-17 00:04:23.024829 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2026-03-17 00:04:23.024863 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-17 00:04:23.024919 | orchestrator | ok: Item: services Runtime: 0:00:00.574577 2026-03-17 00:04:23.047292 | 2026-03-17 00:04:23.047434 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-17 00:04:33.603300 | orchestrator | ok 2026-03-17 00:04:33.612676 | 2026-03-17 00:04:33.612797 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-17 00:05:33.652900 | orchestrator | ok 2026-03-17 00:05:33.663930 | 2026-03-17 00:05:33.664073 | TASK [Fetch manager ssh hostkey] 2026-03-17 00:05:35.246096 | orchestrator | Output suppressed because no_log was given 2026-03-17 00:05:35.260500 | 2026-03-17 
00:05:35.260655 | TASK [Get ssh keypair from terraform environment] 2026-03-17 00:05:35.802402 | orchestrator | ok: Runtime: 0:00:00.005300 2026-03-17 00:05:35.820788 | 2026-03-17 00:05:35.820958 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-17 00:05:35.860351 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-03-17 00:05:35.867933 | 2026-03-17 00:05:35.868048 | TASK [Run manager part 0] 2026-03-17 00:05:36.925890 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:05:36.975410 | orchestrator | 2026-03-17 00:05:36.975455 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-17 00:05:36.975463 | orchestrator | 2026-03-17 00:05:36.975477 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-17 00:05:38.726437 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:38.726499 | orchestrator | 2026-03-17 00:05:38.726524 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-17 00:05:38.726536 | orchestrator | 2026-03-17 00:05:38.726548 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:05:40.650075 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:40.650142 | orchestrator | 2026-03-17 00:05:40.650153 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-17 00:05:41.331720 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:41.331815 | orchestrator | 2026-03-17 00:05:41.331839 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-17 00:05:41.368056 | orchestrator | skipping: [testbed-manager] 2026-03-17 
00:05:41.368101 | orchestrator | 2026-03-17 00:05:41.368114 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-17 00:05:41.397779 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.397816 | orchestrator | 2026-03-17 00:05:41.397824 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-17 00:05:41.427676 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.427735 | orchestrator | 2026-03-17 00:05:41.427748 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-17 00:05:41.455797 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.455848 | orchestrator | 2026-03-17 00:05:41.455859 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-17 00:05:41.482075 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.482121 | orchestrator | 2026-03-17 00:05:41.482129 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-17 00:05:41.506269 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.506299 | orchestrator | 2026-03-17 00:05:41.506306 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-17 00:05:41.534191 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:41.534225 | orchestrator | 2026-03-17 00:05:41.534233 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-17 00:05:42.229823 | orchestrator | changed: [testbed-manager] 2026-03-17 00:05:42.229922 | orchestrator | 2026-03-17 00:05:42.229928 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-17 00:08:40.140622 | orchestrator | changed: [testbed-manager] 2026-03-17 00:08:40.140694 | orchestrator | 2026-03-17 00:08:40.140712 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-17 00:10:01.739394 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:01.739812 | orchestrator | 2026-03-17 00:10:01.739827 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-17 00:10:26.432750 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:26.432802 | orchestrator | 2026-03-17 00:10:26.432815 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-17 00:10:35.070779 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:35.070870 | orchestrator | 2026-03-17 00:10:35.070886 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-17 00:10:35.124652 | orchestrator | ok: [testbed-manager] 2026-03-17 00:10:35.124726 | orchestrator | 2026-03-17 00:10:35.124740 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-17 00:10:38.111036 | orchestrator | ok: [testbed-manager] 2026-03-17 00:10:38.111132 | orchestrator | 2026-03-17 00:10:38.111150 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-17 00:10:38.822579 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:38.822698 | orchestrator | 2026-03-17 00:10:38.822718 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-17 00:10:45.059506 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:45.059577 | orchestrator | 2026-03-17 00:10:45.059646 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-17 00:10:50.823053 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:50.823160 | orchestrator | 2026-03-17 00:10:50.823186 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-03-17 00:10:53.280076 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:53.280143 | orchestrator | 2026-03-17 00:10:53.280156 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-17 00:10:54.866103 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:54.866188 | orchestrator | 2026-03-17 00:10:54.866206 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-17 00:10:55.925608 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:10:55.925710 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:10:55.925734 | orchestrator | 2026-03-17 00:10:55.925755 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-17 00:10:55.969151 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:10:55.969227 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:10:55.969241 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:10:55.969254 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:11:04.140021 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:11:04.140114 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:11:04.140129 | orchestrator | 2026-03-17 00:11:04.140143 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-17 00:11:04.695231 | orchestrator | changed: [testbed-manager] 2026-03-17 00:11:04.695274 | orchestrator | 2026-03-17 00:11:04.695282 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-17 00:14:26.230685 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-17 00:14:26.230756 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-17 00:14:26.230769 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-17 00:14:26.230780 | orchestrator | 2026-03-17 00:14:26.230790 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-17 00:14:28.530667 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-17 00:14:28.530752 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-17 00:14:28.530767 | orchestrator | 2026-03-17 00:14:28.530780 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-17 00:14:28.530793 | orchestrator | 2026-03-17 00:14:28.530804 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:14:29.915881 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:29.915915 | orchestrator | 2026-03-17 00:14:29.915922 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-17 00:14:29.961482 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:29.961517 | 
orchestrator | 2026-03-17 00:14:29.961523 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-17 00:14:30.027445 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:30.027484 | orchestrator | 2026-03-17 00:14:30.027491 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-17 00:14:30.821939 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:30.822167 | orchestrator | 2026-03-17 00:14:30.822186 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-17 00:14:31.545003 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:31.545049 | orchestrator | 2026-03-17 00:14:31.545058 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-17 00:14:32.900193 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-17 00:14:32.900316 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-17 00:14:32.900332 | orchestrator | 2026-03-17 00:14:32.900365 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-17 00:14:34.216736 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:34.216887 | orchestrator | 2026-03-17 00:14:34.216896 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-17 00:14:35.962403 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:14:35.962461 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-17 00:14:35.962470 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:14:35.962477 | orchestrator | 2026-03-17 00:14:35.962485 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-17 00:14:36.020806 | orchestrator | skipping: 
[testbed-manager] 2026-03-17 00:14:36.020858 | orchestrator | 2026-03-17 00:14:36.020865 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-17 00:14:36.082583 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:36.082626 | orchestrator | 2026-03-17 00:14:36.082635 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-17 00:14:36.642138 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:36.642183 | orchestrator | 2026-03-17 00:14:36.642192 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-17 00:14:36.715763 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:36.715809 | orchestrator | 2026-03-17 00:14:36.715818 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-17 00:14:37.560371 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:14:37.560412 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:37.560422 | orchestrator | 2026-03-17 00:14:37.560429 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-17 00:14:37.596238 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:37.596307 | orchestrator | 2026-03-17 00:14:37.596315 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-17 00:14:37.631596 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:37.631666 | orchestrator | 2026-03-17 00:14:37.631678 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-17 00:14:37.669747 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:37.669798 | orchestrator | 2026-03-17 00:14:37.669810 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-17 00:14:37.752659 | 
orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:37.752728 | orchestrator | 2026-03-17 00:14:37.752785 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-17 00:14:38.448831 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:38.448869 | orchestrator | 2026-03-17 00:14:38.448875 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-17 00:14:38.448880 | orchestrator | 2026-03-17 00:14:38.448884 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:14:39.801963 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:39.802109 | orchestrator | 2026-03-17 00:14:39.802127 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-17 00:14:40.741492 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:40.741713 | orchestrator | 2026-03-17 00:14:40.741738 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:14:40.741751 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-17 00:14:40.741763 | orchestrator | 2026-03-17 00:14:41.208587 | orchestrator | ok: Runtime: 0:09:04.586603 2026-03-17 00:14:41.227872 | 2026-03-17 00:14:41.228127 | TASK [Point out that logging in on the manager is now possible] 2026-03-17 00:14:41.277202 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-17 00:14:41.287600 | 2026-03-17 00:14:41.287734 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-17 00:14:41.324104 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 
2026-03-17 00:14:41.333236 | 2026-03-17 00:14:41.333360 | TASK [Run manager part 1 + 2] 2026-03-17 00:14:43.182092 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:14:43.353750 | orchestrator | 2026-03-17 00:14:43.353865 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-17 00:14:43.353886 | orchestrator | 2026-03-17 00:14:43.353918 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:14:46.285311 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:46.285410 | orchestrator | 2026-03-17 00:14:46.285468 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-17 00:14:46.332004 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:46.332071 | orchestrator | 2026-03-17 00:14:46.332084 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-17 00:14:46.378726 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:46.378820 | orchestrator | 2026-03-17 00:14:46.378840 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:14:46.429653 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:46.429711 | orchestrator | 2026-03-17 00:14:46.429724 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:14:46.499393 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:46.499447 | orchestrator | 2026-03-17 00:14:46.499454 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:14:46.565548 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:46.565623 | orchestrator | 2026-03-17 00:14:46.565642 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:14:46.630357 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-17 00:14:46.630412 | orchestrator | 2026-03-17 00:14:46.630419 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-17 00:14:47.356957 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:47.357012 | orchestrator | 2026-03-17 00:14:47.357022 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:14:47.410879 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:14:47.410931 | orchestrator | 2026-03-17 00:14:47.410939 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:14:48.847963 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:48.848012 | orchestrator | 2026-03-17 00:14:48.848020 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:14:49.370792 | orchestrator | ok: [testbed-manager] 2026-03-17 00:14:49.370843 | orchestrator | 2026-03-17 00:14:49.370850 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 00:14:50.438377 | orchestrator | changed: [testbed-manager] 2026-03-17 00:14:50.438438 | orchestrator | 2026-03-17 00:14:50.438455 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:15:04.579166 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:04.579293 | orchestrator | 2026-03-17 00:15:04.579312 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-17 00:15:05.303455 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:05.303499 | orchestrator | 2026-03-17 00:15:05.303509 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-17 00:15:05.358478 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:05.358563 | orchestrator | 2026-03-17 00:15:05.358580 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-17 00:15:06.245649 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:06.245742 | orchestrator | 2026-03-17 00:15:06.245768 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-17 00:15:07.121134 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:07.121217 | orchestrator | 2026-03-17 00:15:07.121257 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-17 00:15:07.645047 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:07.645122 | orchestrator | 2026-03-17 00:15:07.645137 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-17 00:15:07.679778 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:15:07.679878 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:15:07.679895 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:15:07.679908 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:15:09.559198 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:09.559300 | orchestrator | 2026-03-17 00:15:09.559319 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-17 00:15:18.174445 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-17 00:15:18.174488 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-17 00:15:18.174496 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-17 00:15:18.174503 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-17 00:15:18.174513 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-17 00:15:18.174519 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-17 00:15:18.174525 | orchestrator | 2026-03-17 00:15:18.174532 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-17 00:15:19.240030 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:19.240119 | orchestrator | 2026-03-17 00:15:19.240137 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-17 00:15:19.286371 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:19.286441 | orchestrator | 2026-03-17 00:15:19.286451 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-17 00:15:22.298177 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:22.298216 | orchestrator | 2026-03-17 00:15:22.298223 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-17 00:15:22.341981 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:22.342040 | orchestrator | 2026-03-17 00:15:22.342049 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-17 00:16:54.958986 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:16:54.959094 | orchestrator | 2026-03-17 00:16:54.959114 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-17 00:16:56.125897 | orchestrator | ok: [testbed-manager] 2026-03-17 00:16:56.125982 | orchestrator | 2026-03-17 00:16:56.126001 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:16:56.126047 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-17 00:16:56.126064 | orchestrator | 2026-03-17 00:16:56.458961 | orchestrator | ok: Runtime: 0:02:14.565493 2026-03-17 00:16:56.476855 | 2026-03-17 00:16:56.477019 | TASK [Reboot manager] 2026-03-17 00:16:58.013915 | orchestrator | ok: Runtime: 0:00:00.954650 2026-03-17 00:16:58.031266 | 2026-03-17 00:16:58.031423 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-17 00:17:12.139446 | orchestrator | ok 2026-03-17 00:17:12.150379 | 2026-03-17 00:17:12.150535 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-17 00:18:12.206522 | orchestrator | ok 2026-03-17 00:18:12.220041 | 2026-03-17 00:18:12.220206 | TASK [Deploy manager + bootstrap nodes] 2026-03-17 00:18:14.742373 | orchestrator | 2026-03-17 00:18:14.742572 | orchestrator | # DEPLOY MANAGER 2026-03-17 00:18:14.742594 | orchestrator | 2026-03-17 00:18:14.742606 | orchestrator | + set -e 2026-03-17 00:18:14.742616 | orchestrator | + echo 2026-03-17 00:18:14.742626 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-17 00:18:14.742640 | orchestrator | + echo 2026-03-17 00:18:14.742677 | orchestrator | + cat /opt/manager-vars.sh 2026-03-17 00:18:14.745514 | orchestrator | export NUMBER_OF_NODES=6 2026-03-17 00:18:14.745586 | orchestrator | 2026-03-17 00:18:14.745604 | orchestrator | export CEPH_VERSION=reef 2026-03-17 00:18:14.745619 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-17 00:18:14.745629 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-17 00:18:14.745649 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-17 00:18:14.745656 | orchestrator | 2026-03-17 00:18:14.745668 | orchestrator | export ARA=false 2026-03-17 00:18:14.745676 | orchestrator | export DEPLOY_MODE=manager 2026-03-17 00:18:14.745687 | orchestrator | export TEMPEST=true 2026-03-17 00:18:14.745694 | orchestrator | export IS_ZUUL=true 2026-03-17 00:18:14.745701 | orchestrator | 2026-03-17 00:18:14.745712 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:18:14.745720 | orchestrator | export EXTERNAL_API=false 2026-03-17 00:18:14.745727 | orchestrator | 2026-03-17 00:18:14.745733 | orchestrator | export IMAGE_USER=ubuntu 2026-03-17 00:18:14.745743 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-17 00:18:14.745750 | orchestrator | 2026-03-17 00:18:14.745757 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-17 00:18:14.745772 | orchestrator | 2026-03-17 00:18:14.745779 | orchestrator | + echo 2026-03-17 00:18:14.745787 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:18:14.746723 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:18:14.746760 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:18:14.746771 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:18:14.746780 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 00:18:14.746788 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:18:14.746796 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:18:14.746804 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:18:14.746812 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:18:14.746825 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:18:14.746833 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 00:18:14.746840 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:18:14.746847 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:18:14.746854 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:18:14.746861 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 00:18:14.746877 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 00:18:14.746884 | orchestrator | ++ export ARA=false 2026-03-17 00:18:14.746891 | orchestrator | ++ ARA=false 2026-03-17 00:18:14.746898 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:18:14.746905 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:18:14.746911 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:18:14.746918 | orchestrator | ++ TEMPEST=true 2026-03-17 00:18:14.746925 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:18:14.746934 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:18:14.746942 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:18:14.746948 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:18:14.746955 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:18:14.746962 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:18:14.746968 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:18:14.747113 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:18:14.747124 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:18:14.747131 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:18:14.747138 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:18:14.747145 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:18:14.747151 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-17 00:18:14.804738 | orchestrator | + docker version 2026-03-17 00:18:14.902149 | orchestrator | Client: Docker Engine - Community 2026-03-17 00:18:14.902292 | orchestrator | Version: 27.5.1 2026-03-17 00:18:14.902308 | orchestrator | API version: 1.47 2026-03-17 00:18:14.902322 | orchestrator | Go version: go1.22.11 2026-03-17 00:18:14.902334 | orchestrator | Git commit: 9f9e405 2026-03-17 00:18:14.902345 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:18:14.902357 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:18:14.902369 | orchestrator | Context: default 2026-03-17 00:18:14.902380 | orchestrator | 2026-03-17 00:18:14.902391 | orchestrator | Server: Docker Engine - Community 2026-03-17 00:18:14.902402 | orchestrator | Engine: 2026-03-17 00:18:14.902413 | orchestrator | Version: 27.5.1 2026-03-17 00:18:14.902425 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-17 00:18:14.902465 | orchestrator | Go version: go1.22.11 2026-03-17 00:18:14.902476 | orchestrator | Git commit: 4c9b3b0 2026-03-17 00:18:14.902487 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:18:14.902498 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:18:14.902509 | orchestrator | Experimental: false 2026-03-17 00:18:14.902520 | orchestrator | containerd: 2026-03-17 00:18:14.902531 | orchestrator | Version: v2.2.2 2026-03-17 00:18:14.902543 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-17 00:18:14.902555 | orchestrator | runc: 2026-03-17 00:18:14.902565 | orchestrator | Version: 1.3.4 2026-03-17 00:18:14.902577 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-17 00:18:14.902588 | orchestrator | docker-init: 2026-03-17 00:18:14.902598 | orchestrator | Version: 0.19.0 2026-03-17 00:18:14.902610 | orchestrator | GitCommit: de40ad0 2026-03-17 00:18:14.904371 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-17 00:18:14.913261 | orchestrator | + set -e 2026-03-17 00:18:14.913318 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:18:14.913332 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:18:14.913344 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:18:14.913355 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:18:14.913367 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:18:14.913378 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 
00:18:14.913389 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:18:14.913401 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:18:14.913412 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:18:14.913423 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 00:18:14.913434 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 00:18:14.913445 | orchestrator | ++ export ARA=false 2026-03-17 00:18:14.913456 | orchestrator | ++ ARA=false 2026-03-17 00:18:14.913467 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:18:14.913478 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:18:14.913489 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:18:14.913500 | orchestrator | ++ TEMPEST=true 2026-03-17 00:18:14.913511 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:18:14.913522 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:18:14.913533 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:18:14.913544 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:18:14.913555 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:18:14.913566 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:18:14.913577 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:18:14.913588 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:18:14.913599 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:18:14.913609 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:18:14.913621 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:18:14.913632 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:18:14.913643 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:18:14.913654 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:18:14.913665 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:18:14.913675 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:18:14.913690 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-17 00:18:14.913709 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-17 00:18:14.913720 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-17 00:18:14.913731 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-17 00:18:14.920602 | orchestrator | + set -e
2026-03-17 00:18:14.920661 | orchestrator | + VERSION=reef
2026-03-17 00:18:14.921685 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-17 00:18:14.927744 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-17 00:18:14.927807 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-17 00:18:14.932468 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-17 00:18:14.938691 | orchestrator | + set -e
2026-03-17 00:18:14.939260 | orchestrator | + VERSION=2024.2
2026-03-17 00:18:14.939612 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-17 00:18:14.943682 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-17 00:18:14.943754 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-17 00:18:14.948063 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-17 00:18:14.948746 | orchestrator | ++ semver latest 7.0.0
2026-03-17 00:18:15.008062 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-17 00:18:15.008181 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-17 00:18:15.008198 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-17 00:18:15.008956 | orchestrator | ++ semver latest 10.0.0-0
2026-03-17 00:18:15.067248 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-17 00:18:15.067938 | orchestrator | ++ semver 2024.2 2025.1
2026-03-17 00:18:15.121823 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-17 00:18:15.121911 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-17 00:18:15.216044 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-17 00:18:15.218531 | orchestrator | + source /opt/venv/bin/activate
2026-03-17 00:18:15.219568 | orchestrator | ++ deactivate nondestructive
2026-03-17 00:18:15.219588 | orchestrator | ++ '[' -n '' ']'
2026-03-17 00:18:15.219602 | orchestrator | ++ '[' -n '' ']'
2026-03-17 00:18:15.219619 | orchestrator | ++ hash -r
2026-03-17 00:18:15.219631 | orchestrator | ++ '[' -n '' ']'
2026-03-17 00:18:15.219643 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-17 00:18:15.219754 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-17 00:18:15.219772 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-17 00:18:15.219984 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-17 00:18:15.219999 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-17 00:18:15.220011 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-17 00:18:15.220022 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-17 00:18:15.220038 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-17 00:18:15.220051 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-17 00:18:15.220062 | orchestrator | ++ export PATH
2026-03-17 00:18:15.220140 | orchestrator | ++ '[' -n '' ']'
2026-03-17 00:18:15.220179 | orchestrator | ++ '[' -z '' ']'
2026-03-17 00:18:15.220195 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-17 00:18:15.220206 | orchestrator | ++ PS1='(venv) '
2026-03-17 00:18:15.220220 | orchestrator | ++ export PS1
2026-03-17 00:18:15.220232 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-17 00:18:15.220352 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-17 00:18:15.220368 | orchestrator | ++ hash -r
2026-03-17 00:18:15.220599 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-17 00:18:16.306448 | orchestrator |
2026-03-17 00:18:16.306563 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-17 00:18:16.306581 | orchestrator |
2026-03-17 00:18:16.306593 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-17 00:18:16.842180 | orchestrator | ok: [testbed-manager]
2026-03-17 00:18:16.842303 | orchestrator |
2026-03-17 00:18:16.842330 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-17 00:18:17.786720 | orchestrator | changed: [testbed-manager]
2026-03-17 00:18:17.947665 | orchestrator |
2026-03-17 00:18:17.947737 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-17 00:18:17.947752 | orchestrator |
2026-03-17 00:18:17.947763 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-17 00:18:20.110276 | orchestrator | ok: [testbed-manager]
2026-03-17 00:18:20.110374 | orchestrator |
2026-03-17 00:18:20.110393 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-17 00:18:20.161927 | orchestrator | ok: [testbed-manager]
2026-03-17 00:18:20.162005 | orchestrator |
2026-03-17 00:18:20.162061 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-17 00:18:20.612048 | orchestrator | changed: [testbed-manager]
2026-03-17 00:18:20.612142 | orchestrator |
2026-03-17 00:18:20.612186 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-17 00:18:20.645148 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:18:20.645253 | orchestrator |
2026-03-17 00:18:20.645271 | orchestrator | TASK [Install HWE
kernel package on Ubuntu] ************************************ 2026-03-17 00:18:20.952744 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:20.952847 | orchestrator | 2026-03-17 00:18:20.952867 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-17 00:18:21.290644 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:21.290739 | orchestrator | 2026-03-17 00:18:21.290755 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-17 00:18:21.400766 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:18:21.400850 | orchestrator | 2026-03-17 00:18:21.400864 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-17 00:18:21.400877 | orchestrator | 2026-03-17 00:18:21.400888 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:18:23.138530 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:23.138588 | orchestrator | 2026-03-17 00:18:23.138601 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-17 00:18:23.238770 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-17 00:18:23.238893 | orchestrator | 2026-03-17 00:18:23.238918 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-17 00:18:23.305139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-17 00:18:23.305273 | orchestrator | 2026-03-17 00:18:23.305290 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-17 00:18:24.399615 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-17 00:18:24.399707 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
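The "Create required directories" tasks in the traefik (and later manager) role report `changed` only for paths that actually had to be created and `ok` for ones that already existed. A rough shell analogue of that idempotent loop, using a hypothetical temporary root instead of `/opt/traefik`:

```shell
# Idempotent directory creation: "ok" if the path already exists,
# "changed" if it had to be created (mirrors the per-item ansible output).
root=$(mktemp -d)
mkdir -p "$root/traefik"   # pre-create one path so it reports "ok"

for dir in "$root/traefik" "$root/traefik/certificates" "$root/traefik/configuration"; do
    if [ -d "$dir" ]; then
        echo "ok: $dir"
    else
        mkdir -p "$dir"
        echo "changed: $dir"
    fi
done
```

Running the same loop a second time would print only `ok:` lines, which is the property that lets the whole playbook be re-applied safely.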
2026-03-17 00:18:24.399721 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-17 00:18:24.399733 | orchestrator | 2026-03-17 00:18:24.399745 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-17 00:18:26.212677 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-17 00:18:26.212768 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-17 00:18:26.212781 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-17 00:18:26.212792 | orchestrator | 2026-03-17 00:18:26.212805 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-17 00:18:26.820199 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:18:26.820302 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:26.820322 | orchestrator | 2026-03-17 00:18:26.820342 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-17 00:18:27.426896 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:18:27.427000 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:27.427019 | orchestrator | 2026-03-17 00:18:27.427033 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-17 00:18:27.481535 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:18:27.481623 | orchestrator | 2026-03-17 00:18:27.481641 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-17 00:18:27.853683 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:27.853778 | orchestrator | 2026-03-17 00:18:27.853795 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-17 00:18:27.920922 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-17 00:18:27.921044 | orchestrator | 2026-03-17 00:18:27.921067 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-17 00:18:29.046340 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:29.046440 | orchestrator | 2026-03-17 00:18:29.046458 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-17 00:18:29.828654 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:29.828762 | orchestrator | 2026-03-17 00:18:29.828780 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-17 00:18:49.142618 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:49.142718 | orchestrator | 2026-03-17 00:18:49.142754 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-17 00:18:49.201806 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:18:49.201888 | orchestrator | 2026-03-17 00:18:49.201902 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-17 00:18:49.201914 | orchestrator | 2026-03-17 00:18:49.201924 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:18:51.046791 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:51.046871 | orchestrator | 2026-03-17 00:18:51.046919 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-17 00:18:51.151573 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-17 00:18:51.151650 | orchestrator | 2026-03-17 00:18:51.151661 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-17 00:18:51.207651 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:18:51.207714 | orchestrator | 2026-03-17 00:18:51.207720 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-17 00:18:53.588542 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:53.588679 | orchestrator | 2026-03-17 00:18:53.588707 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-17 00:18:53.647018 | orchestrator | ok: [testbed-manager] 2026-03-17 00:18:53.647105 | orchestrator | 2026-03-17 00:18:53.647120 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-17 00:18:53.770768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-17 00:18:53.770849 | orchestrator | 2026-03-17 00:18:53.770864 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-17 00:18:56.530299 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-17 00:18:56.530413 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-17 00:18:56.530429 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-17 00:18:56.530442 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-17 00:18:56.530453 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-17 00:18:56.530465 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-17 00:18:56.530476 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-17 00:18:56.530487 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-17 00:18:56.530498 | orchestrator | 2026-03-17 00:18:56.530511 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-17 00:18:57.132312 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:57.132409 | orchestrator | 2026-03-17 00:18:57.132426 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-17 00:18:57.766575 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:57.766674 | orchestrator | 2026-03-17 00:18:57.766690 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-17 00:18:57.838813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-17 00:18:57.838944 | orchestrator | 2026-03-17 00:18:57.838959 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-17 00:18:59.031179 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-17 00:18:59.031263 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-17 00:18:59.031276 | orchestrator | 2026-03-17 00:18:59.031287 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-17 00:18:59.612618 | orchestrator | changed: [testbed-manager] 2026-03-17 00:18:59.612713 | orchestrator | 2026-03-17 00:18:59.612731 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-17 00:18:59.661766 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:18:59.661860 | orchestrator | 2026-03-17 00:18:59.661875 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-17 00:18:59.742476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-17 00:18:59.742600 | orchestrator | 2026-03-17 00:18:59.742627 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-17 00:19:00.322594 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:00.322684 | orchestrator | 2026-03-17 00:19:00.322700 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-17 00:19:00.382441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-17 00:19:00.382541 | orchestrator | 2026-03-17 00:19:00.382551 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-17 00:19:01.692726 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:19:01.692820 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:19:01.692835 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:01.692855 | orchestrator | 2026-03-17 00:19:01.692876 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-17 00:19:02.315220 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:02.315296 | orchestrator | 2026-03-17 00:19:02.315305 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-17 00:19:02.373315 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:02.373396 | orchestrator | 2026-03-17 00:19:02.373412 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-17 00:19:02.462604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-17 00:19:02.462718 | orchestrator | 2026-03-17 00:19:02.462734 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-17 00:19:02.954488 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:19:02.954588 | orchestrator | 2026-03-17 00:19:02.954628 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-17 00:19:03.336826 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:03.336910 | orchestrator | 2026-03-17 00:19:03.336924 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-17 00:19:04.531450 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-17 00:19:04.531544 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-17 00:19:04.531559 | orchestrator | 2026-03-17 00:19:04.531572 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-17 00:19:05.158826 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:05.158892 | orchestrator | 2026-03-17 00:19:05.158907 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-17 00:19:05.518122 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:05.518234 | orchestrator | 2026-03-17 00:19:05.518248 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-17 00:19:05.875605 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:05.875701 | orchestrator | 2026-03-17 00:19:05.875717 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-17 00:19:05.926835 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:05.926917 | orchestrator | 2026-03-17 00:19:05.926933 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-17 00:19:05.991528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-17 00:19:05.991620 | orchestrator | 2026-03-17 00:19:05.991636 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-17 00:19:06.034788 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:06.034857 | orchestrator | 2026-03-17 00:19:06.034866 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-17 00:19:07.965555 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-17 00:19:07.965663 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-17 00:19:07.965679 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-17 00:19:07.965692 | orchestrator | 2026-03-17 00:19:07.965704 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-17 00:19:08.653080 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:08.653209 | orchestrator | 2026-03-17 00:19:08.653227 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-17 00:19:09.350336 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:09.350419 | orchestrator | 2026-03-17 00:19:09.350434 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-17 00:19:10.054289 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:10.054377 | orchestrator | 2026-03-17 00:19:10.054395 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-17 00:19:10.132679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-17 00:19:10.132761 | orchestrator | 2026-03-17 00:19:10.132776 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-17 00:19:10.177870 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:10.177960 | orchestrator | 2026-03-17 00:19:10.177977 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-17 00:19:10.868254 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-17 00:19:10.868346 | orchestrator | 2026-03-17 00:19:10.868361 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-17 00:19:10.945298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-17 00:19:10.945384 | orchestrator | 2026-03-17 00:19:10.945399 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-17 00:19:11.632555 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:11.632645 | orchestrator | 2026-03-17 00:19:11.632660 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-17 00:19:12.239429 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:12.239519 | orchestrator | 2026-03-17 00:19:12.239536 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-17 00:19:12.298205 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:12.298287 | orchestrator | 2026-03-17 00:19:12.298308 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-17 00:19:12.353983 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:12.354115 | orchestrator | 2026-03-17 00:19:12.354140 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-17 00:19:13.173374 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:13.173496 | orchestrator | 2026-03-17 00:19:13.173524 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-17 00:20:31.386464 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:31.386618 | orchestrator | 2026-03-17 
00:20:31.386638 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-17 00:20:32.430048 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:32.430169 | orchestrator | 2026-03-17 00:20:32.430183 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-17 00:20:32.485556 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:32.485654 | orchestrator | 2026-03-17 00:20:32.485671 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-17 00:20:37.298285 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:37.298393 | orchestrator | 2026-03-17 00:20:37.298411 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-17 00:20:37.389522 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:37.389631 | orchestrator | 2026-03-17 00:20:37.389681 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-17 00:20:37.389701 | orchestrator | 2026-03-17 00:20:37.389719 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-17 00:20:37.436851 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:37.436961 | orchestrator | 2026-03-17 00:20:37.436978 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-17 00:21:37.489728 | orchestrator | Pausing for 60 seconds 2026-03-17 00:21:37.489866 | orchestrator | changed: [testbed-manager] 2026-03-17 00:21:37.489883 | orchestrator | 2026-03-17 00:21:37.489897 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-17 00:21:40.590585 | orchestrator | changed: [testbed-manager] 2026-03-17 00:21:40.590680 | orchestrator | 2026-03-17 00:21:40.590697 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-17 00:22:22.015880 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-17 00:22:22.015998 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-17 00:22:22.016016 | orchestrator | changed: [testbed-manager] 2026-03-17 00:22:22.016055 | orchestrator | 2026-03-17 00:22:22.016068 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-17 00:22:27.530976 | orchestrator | changed: [testbed-manager] 2026-03-17 00:22:27.531083 | orchestrator | 2026-03-17 00:22:27.531102 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-17 00:22:27.622384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-17 00:22:27.622476 | orchestrator | 2026-03-17 00:22:27.622491 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-17 00:22:27.622503 | orchestrator | 2026-03-17 00:22:27.622515 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-17 00:22:27.674629 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:22:27.674716 | orchestrator | 2026-03-17 00:22:27.674731 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-17 00:22:27.755602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-17 00:22:27.755704 | orchestrator | 2026-03-17 00:22:27.755720 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-17 00:22:28.558370 | orchestrator | changed: [testbed-manager] 2026-03-17 00:22:28.558468 | 
orchestrator | 2026-03-17 00:22:28.558485 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-17 00:22:31.705648 | orchestrator | ok: [testbed-manager] 2026-03-17 00:22:31.705742 | orchestrator | 2026-03-17 00:22:31.705757 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-17 00:22:31.782721 | orchestrator | ok: [testbed-manager] => { 2026-03-17 00:22:31.782870 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-17 00:22:31.782899 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-17 00:22:31.782919 | orchestrator | "Checking running containers against expected versions...", 2026-03-17 00:22:31.782940 | orchestrator | "", 2026-03-17 00:22:31.782964 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-17 00:22:31.782986 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-17 00:22:31.783004 | orchestrator | " Enabled: true", 2026-03-17 00:22:31.783024 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-17 00:22:31.783045 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:22:31.783066 | orchestrator | "", 2026-03-17 00:22:31.783087 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-17 00:22:31.783108 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-17 00:22:31.783214 | orchestrator | " Enabled: true", 2026-03-17 00:22:31.783234 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-17 00:22:31.783254 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:22:31.783274 | orchestrator | "", 2026-03-17 00:22:31.783292 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-17 00:22:31.783312 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-17 
00:22:31.783331 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783350 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-17 00:22:31.783369 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783389 | orchestrator | "",
2026-03-17 00:22:31.783408 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-17 00:22:31.783426 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-17 00:22:31.783447 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783466 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-17 00:22:31.783484 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783502 | orchestrator | "",
2026-03-17 00:22:31.783517 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-17 00:22:31.783535 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-17 00:22:31.783593 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783613 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-17 00:22:31.783631 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783649 | orchestrator | "",
2026-03-17 00:22:31.783668 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-17 00:22:31.783686 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.783704 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783722 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.783742 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783761 | orchestrator | "",
2026-03-17 00:22:31.783778 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-17 00:22:31.783797 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-17 00:22:31.783816 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783835 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-17 00:22:31.783854 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783870 | orchestrator | "",
2026-03-17 00:22:31.783885 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-17 00:22:31.783901 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-17 00:22:31.783919 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.783937 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-17 00:22:31.783955 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.783973 | orchestrator | "",
2026-03-17 00:22:31.784006 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-17 00:22:31.784027 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-17 00:22:31.784050 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784068 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-17 00:22:31.784085 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784102 | orchestrator | "",
2026-03-17 00:22:31.784148 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-17 00:22:31.784166 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-17 00:22:31.784185 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784204 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-17 00:22:31.784222 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784238 | orchestrator | "",
2026-03-17 00:22:31.784253 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-17 00:22:31.784268 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784286 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784304 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784323 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784342 | orchestrator | "",
2026-03-17 00:22:31.784358 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-17 00:22:31.784376 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784393 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784410 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784429 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784447 | orchestrator | "",
2026-03-17 00:22:31.784465 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-17 00:22:31.784484 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784502 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784519 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784538 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784556 | orchestrator | "",
2026-03-17 00:22:31.784575 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-17 00:22:31.784593 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784612 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784628 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784646 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784681 | orchestrator | "",
2026-03-17 00:22:31.784701 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-17 00:22:31.784748 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784769 | orchestrator | " Enabled: true",
2026-03-17 00:22:31.784787 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-17 00:22:31.784805 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:22:31.784822 | orchestrator | "",
2026-03-17 00:22:31.784839 | orchestrator | "=== Summary ===",
2026-03-17 00:22:31.784857 | orchestrator | "Errors (version mismatches): 0",
2026-03-17 00:22:31.784875 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-17 00:22:31.784893 | orchestrator | "",
2026-03-17 00:22:31.784911 | orchestrator | "✅ All running containers match expected versions!"
2026-03-17 00:22:31.784927 | orchestrator | ]
2026-03-17 00:22:31.784945 | orchestrator | }
2026-03-17 00:22:31.784963 | orchestrator |
2026-03-17 00:22:31.784981 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-17 00:22:31.852060 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:22:31.852195 | orchestrator |
2026-03-17 00:22:31.852211 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:22:31.852224 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-17 00:22:31.852236 | orchestrator |
2026-03-17 00:22:31.946881 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-17 00:22:31.946969 | orchestrator | + deactivate
2026-03-17 00:22:31.946984 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-17 00:22:31.947000 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-17 00:22:31.947011 | orchestrator | + export PATH
2026-03-17 00:22:31.947023 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-17 00:22:31.947035 | orchestrator | + '[' -n '' ']'
2026-03-17 00:22:31.947046 | orchestrator | + hash -r
2026-03-17 00:22:31.947058 | orchestrator | + '[' -n '' ']'
2026-03-17 00:22:31.947068 | orchestrator | + unset VIRTUAL_ENV
2026-03-17 00:22:31.947079 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-17 00:22:31.947091 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-17 00:22:31.947101 | orchestrator | + unset -f deactivate
2026-03-17 00:22:31.947145 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-17 00:22:31.955627 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-17 00:22:31.955706 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-17 00:22:31.955721 | orchestrator | + local max_attempts=60
2026-03-17 00:22:31.955734 | orchestrator | + local name=ceph-ansible
2026-03-17 00:22:31.955745 | orchestrator | + local attempt_num=1
2026-03-17 00:22:31.956374 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:22:31.995080 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:22:31.995190 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-17 00:22:31.995204 | orchestrator | + local max_attempts=60
2026-03-17 00:22:31.995216 | orchestrator | + local name=kolla-ansible
2026-03-17 00:22:31.995226 | orchestrator | + local attempt_num=1
2026-03-17 00:22:31.996087 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-17 00:22:32.035674 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:22:32.035758 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-17 00:22:32.035772 | orchestrator | + local max_attempts=60
2026-03-17 00:22:32.035784 | orchestrator | + local name=osism-ansible
2026-03-17 00:22:32.035795 | orchestrator | + local attempt_num=1
2026-03-17 00:22:32.036694 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-17 00:22:32.070336 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:22:32.070417 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-17 00:22:32.070432 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-17 00:22:32.657299 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-17 00:22:32.837521 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-17 00:22:32.837649 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837666 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837678 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-17 00:22:32.837691 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-03-17 00:22:32.837703 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837713 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837724 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2026-03-17 00:22:32.837751 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837763 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-03-17 00:22:32.837774 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837785 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-03-17 00:22:32.837795 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837806 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-17 00:22:32.837817 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.837828 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-03-17 00:22:32.841429 | orchestrator | ++ semver latest 7.0.0
2026-03-17 00:22:32.884434 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-17 00:22:32.884525 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-17 00:22:32.884542 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-17 00:22:32.888346 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-17 00:22:45.232412 | orchestrator | 2026-03-17 00:22:45 | INFO  | Prepare task for execution of resolvconf.
2026-03-17 00:22:45.407161 | orchestrator | 2026-03-17 00:22:45 | INFO  | Task 656952ad-c7b7-421e-849d-ec583390b82b (resolvconf) was prepared for execution.
2026-03-17 00:22:45.407275 | orchestrator | 2026-03-17 00:22:45 | INFO  | It takes a moment until task 656952ad-c7b7-421e-849d-ec583390b82b (resolvconf) has been started and output is visible here.
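The shell trace above expands `wait_for_container_healthy`, which probes a container's Docker health state with `docker inspect -f '{{.State.Health.Status}}'` before the deployment continues. A minimal sketch of such a helper follows; only the `max_attempts`/`name`/`attempt_num` locals and the inspect probe appear in the trace — the retry interval, the `DOCKER_BIN` override, and the error message are assumptions for illustration, not the testbed script's actual implementation.

```shell
#!/usr/bin/env bash
# Sketch of a health-wait helper as suggested by the trace above.
# DOCKER_BIN is an assumed override so the function can be exercised
# without a running Docker daemon; the real script calls /usr/bin/docker.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container reports "healthy" or attempts run out.
    until [[ "$("${DOCKER_BIN:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # assumed retry interval; not visible in the trace
    done
}
```

In the log it is invoked as `wait_for_container_healthy 60 ceph-ansible` and returns on the first probe because the container is already healthy.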
2026-03-17 00:22:57.618176 | orchestrator |
2026-03-17 00:22:57.618291 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-17 00:22:57.618309 | orchestrator |
2026-03-17 00:22:57.618322 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-17 00:22:57.618334 | orchestrator | Tuesday 17 March 2026 00:22:48 +0000 (0:00:00.158) 0:00:00.158 *********
2026-03-17 00:22:57.618345 | orchestrator | ok: [testbed-manager]
2026-03-17 00:22:57.618357 | orchestrator |
2026-03-17 00:22:57.618369 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-17 00:22:57.618381 | orchestrator | Tuesday 17 March 2026 00:22:51 +0000 (0:00:03.425) 0:00:03.584 *********
2026-03-17 00:22:57.618392 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:22:57.618403 | orchestrator |
2026-03-17 00:22:57.618414 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-17 00:22:57.618425 | orchestrator | Tuesday 17 March 2026 00:22:51 +0000 (0:00:00.053) 0:00:03.637 *********
2026-03-17 00:22:57.618436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-17 00:22:57.618448 | orchestrator |
2026-03-17 00:22:57.618459 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-17 00:22:57.618471 | orchestrator | Tuesday 17 March 2026 00:22:51 +0000 (0:00:00.063) 0:00:03.701 *********
2026-03-17 00:22:57.618493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:22:57.618505 | orchestrator |
2026-03-17 00:22:57.618517 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-17 00:22:57.618528 | orchestrator | Tuesday 17 March 2026 00:22:51 +0000 (0:00:00.072) 0:00:03.773 *********
2026-03-17 00:22:57.618539 | orchestrator | ok: [testbed-manager]
2026-03-17 00:22:57.618550 | orchestrator |
2026-03-17 00:22:57.618561 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-17 00:22:57.618572 | orchestrator | Tuesday 17 March 2026 00:22:52 +0000 (0:00:01.097) 0:00:04.870 *********
2026-03-17 00:22:57.618583 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:22:57.618594 | orchestrator |
2026-03-17 00:22:57.618605 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-17 00:22:57.618616 | orchestrator | Tuesday 17 March 2026 00:22:52 +0000 (0:00:00.057) 0:00:04.928 *********
2026-03-17 00:22:57.618630 | orchestrator | ok: [testbed-manager]
2026-03-17 00:22:57.618643 | orchestrator |
2026-03-17 00:22:57.618655 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-17 00:22:57.618668 | orchestrator | Tuesday 17 March 2026 00:22:53 +0000 (0:00:00.553) 0:00:05.481 *********
2026-03-17 00:22:57.618681 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:22:57.618693 | orchestrator |
2026-03-17 00:22:57.618706 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-17 00:22:57.618719 | orchestrator | Tuesday 17 March 2026 00:22:53 +0000 (0:00:00.076) 0:00:05.558 *********
2026-03-17 00:22:57.618731 | orchestrator | changed: [testbed-manager]
2026-03-17 00:22:57.618744 | orchestrator |
2026-03-17 00:22:57.618756 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-17 00:22:57.618769 | orchestrator | Tuesday 17 March 2026 00:22:54 +0000 (0:00:00.582) 0:00:06.141 *********
2026-03-17 00:22:57.618781 | orchestrator | changed: [testbed-manager]
2026-03-17 00:22:57.618793 | orchestrator |
2026-03-17 00:22:57.618805 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-17 00:22:57.618818 | orchestrator | Tuesday 17 March 2026 00:22:55 +0000 (0:00:01.070) 0:00:07.211 *********
2026-03-17 00:22:57.618830 | orchestrator | ok: [testbed-manager]
2026-03-17 00:22:57.618843 | orchestrator |
2026-03-17 00:22:57.618878 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-17 00:22:57.618891 | orchestrator | Tuesday 17 March 2026 00:22:56 +0000 (0:00:00.974) 0:00:08.185 *********
2026-03-17 00:22:57.618904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-17 00:22:57.618917 | orchestrator |
2026-03-17 00:22:57.618929 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-17 00:22:57.618942 | orchestrator | Tuesday 17 March 2026 00:22:56 +0000 (0:00:00.093) 0:00:08.279 *********
2026-03-17 00:22:57.618955 | orchestrator | changed: [testbed-manager]
2026-03-17 00:22:57.618967 | orchestrator |
2026-03-17 00:22:57.618979 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:22:57.618991 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:22:57.619002 | orchestrator |
2026-03-17 00:22:57.619013 | orchestrator |
2026-03-17 00:22:57.619024 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:22:57.619035 | orchestrator | Tuesday 17 March 2026 00:22:57 +0000 (0:00:01.136) 0:00:09.415 *********
2026-03-17 00:22:57.619046 | orchestrator | ===============================================================================
2026-03-17 00:22:57.619057 | orchestrator | Gathering Facts --------------------------------------------------------- 3.43s
2026-03-17 00:22:57.619067 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2026-03-17 00:22:57.619078 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s
2026-03-17 00:22:57.619089 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s
2026-03-17 00:22:57.619100 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2026-03-17 00:22:57.619131 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s
2026-03-17 00:22:57.619161 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2026-03-17 00:22:57.619173 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-03-17 00:22:57.619184 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-17 00:22:57.619195 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-17 00:22:57.619206 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.06s
2026-03-17 00:22:57.619217 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-17 00:22:57.619228 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2026-03-17 00:22:57.744396 | orchestrator | + osism apply sshconfig
2026-03-17 00:23:08.942736 | orchestrator | 2026-03-17 00:23:08 | INFO  | Prepare task for execution of sshconfig.
2026-03-17 00:23:09.011373 | orchestrator | 2026-03-17 00:23:09 | INFO  | Task 5b1b270b-714e-4045-ae8f-0694a6943145 (sshconfig) was prepared for execution.
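Earlier in this log, the manager service version check prints an Expected/Running pair for each container and flags it MATCH or counts it as a version mismatch. The comparison at its core can be sketched as below; `check_image` and the `DOCKER_BIN` override are illustrative names, not the osism.services.manager implementation — only the expected-vs-running image comparison itself is taken from the log.

```shell
#!/usr/bin/env bash
# Sketch: compare a container's actual image reference against the expected
# one, mirroring the MATCH/mismatch output seen earlier in this log.
# DOCKER_BIN is an assumed override so the check can run without a daemon.
check_image() {
    local name="$1" expected="$2" running
    # .Config.Image is the image reference the container was created from.
    running="$("${DOCKER_BIN:-/usr/bin/docker}" inspect -f '{{.Config.Image}}' "$name")" || return 1
    if [[ "$running" == "$expected" ]]; then
        echo "$name: MATCH ($running)"
    else
        echo "$name: MISMATCH expected=$expected running=$running" >&2
        return 1
    fi
}
```

Against a live manager this would be called per service, e.g. `check_image kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2`, with any nonzero return counted as a version mismatch.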
2026-03-17 00:23:09.011466 | orchestrator | 2026-03-17 00:23:09 | INFO  | It takes a moment until task 5b1b270b-714e-4045-ae8f-0694a6943145 (sshconfig) has been started and output is visible here.
2026-03-17 00:23:20.138724 | orchestrator |
2026-03-17 00:23:20.138812 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-17 00:23:20.138823 | orchestrator |
2026-03-17 00:23:20.138830 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-17 00:23:20.138837 | orchestrator | Tuesday 17 March 2026 00:23:12 +0000 (0:00:00.190) 0:00:00.190 *********
2026-03-17 00:23:20.138844 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:20.138851 | orchestrator |
2026-03-17 00:23:20.138857 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-17 00:23:20.138863 | orchestrator | Tuesday 17 March 2026 00:23:13 +0000 (0:00:00.880) 0:00:01.071 *********
2026-03-17 00:23:20.138889 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:20.138897 | orchestrator |
2026-03-17 00:23:20.138903 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-17 00:23:20.138910 | orchestrator | Tuesday 17 March 2026 00:23:13 +0000 (0:00:00.567) 0:00:01.638 *********
2026-03-17 00:23:20.138916 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:23:20.138922 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:23:20.138929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:23:20.138935 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:23:20.138941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:23:20.138947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:23:20.138953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:23:20.138959 | orchestrator |
2026-03-17 00:23:20.138965 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-17 00:23:20.138971 | orchestrator | Tuesday 17 March 2026 00:23:19 +0000 (0:00:05.720) 0:00:07.359 *********
2026-03-17 00:23:20.138977 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:20.138983 | orchestrator |
2026-03-17 00:23:20.138990 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-17 00:23:20.138996 | orchestrator | Tuesday 17 March 2026 00:23:19 +0000 (0:00:00.104) 0:00:07.464 *********
2026-03-17 00:23:20.139002 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:20.139008 | orchestrator |
2026-03-17 00:23:20.139014 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:23:20.139022 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:23:20.139028 | orchestrator |
2026-03-17 00:23:20.139035 | orchestrator |
2026-03-17 00:23:20.139041 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:23:20.139047 | orchestrator | Tuesday 17 March 2026 00:23:19 +0000 (0:00:00.535) 0:00:07.999 *********
2026-03-17 00:23:20.139053 | orchestrator | ===============================================================================
2026-03-17 00:23:20.139059 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s
2026-03-17 00:23:20.139065 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.88s
2026-03-17 00:23:20.139071 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.57s
2026-03-17 00:23:20.139077 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s
2026-03-17 00:23:20.139084 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s
2026-03-17 00:23:20.307529 | orchestrator | + osism apply known-hosts
2026-03-17 00:23:31.747463 | orchestrator | 2026-03-17 00:23:31 | INFO  | Prepare task for execution of known-hosts.
2026-03-17 00:23:31.830830 | orchestrator | 2026-03-17 00:23:31 | INFO  | Task 645a92d5-d533-48f1-b5d9-6096b015631a (known-hosts) was prepared for execution.
2026-03-17 00:23:31.830925 | orchestrator | 2026-03-17 00:23:31 | INFO  | It takes a moment until task 645a92d5-d533-48f1-b5d9-6096b015631a (known-hosts) has been started and output is visible here.
2026-03-17 00:23:47.013411 | orchestrator |
2026-03-17 00:23:47.013516 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-17 00:23:47.013532 | orchestrator |
2026-03-17 00:23:47.013545 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-17 00:23:47.013557 | orchestrator | Tuesday 17 March 2026 00:23:34 +0000 (0:00:00.194) 0:00:00.194 *********
2026-03-17 00:23:47.013568 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:23:47.013580 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:23:47.013591 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:23:47.013625 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:23:47.013636 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:23:47.013647 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:23:47.013657 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:23:47.013668 | orchestrator |
2026-03-17 00:23:47.013679 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-17 00:23:47.013691 | orchestrator | Tuesday 17 March 2026 00:23:41 +0000 (0:00:06.400) 0:00:06.594 *********
2026-03-17 00:23:47.013714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-17 00:23:47.013747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-17 00:23:47.013759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-17 00:23:47.013783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-17 00:23:47.013794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-17 00:23:47.013805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-17 00:23:47.013816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-17 00:23:47.013827 | orchestrator |
2026-03-17 00:23:47.013838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.013849 | orchestrator | Tuesday 17 March 2026 00:23:41 +0000 (0:00:00.162) 0:00:06.757 *********
2026-03-17 00:23:47.013860 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+uFR4Io1ZqZfcITmVHm2DQZ5bHHvZkq2Rrt068S27Y)
2026-03-17 00:23:47.013876 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcP0rpa+DKazLZvB7qBSMKhgmt/fbFL/fXTlx0UkZgXleHQF0UOPzTw1tVml0Z+kfmyUNVDsehFH3I/LGDSkqLOpWCLAqfuqUpsOnthRS/aWJI8dw94EIYiuY4wHtEa2lMO6S6BjILG0+PnevdbZ0C/B+Kx6EYGN3W1Adog6n3PwOGQFM9WZX0jeJ7oLO+Ow0p+FqZALtwsusHsHNpPgkNCE0CpBcuT3zEiV5d16K7Tb66NGdlXRCjWTySsTVJxJ0MpghGlriaz3gBBigyOQRgdDFAvow5teKvtyPg3g61B1ry+ZNnAodkYABe9dkZwPZ30lsfDesOUkiYK7V4iDuFXxsP/qHth67xE3f1Aqkjy0LEyL4inbUBQUUHrBwuvFTKdcbr8r4DPUXoaZt7TUPjCS0iG4cIfwhGcF9Di3xvTgrTLXT0dFdZTUtW6MqytLX3rfg8XuO2LFIAZCxGcjNFTAD2y5SFdfMf7M66ICWgs7v1Ab58LuEQXfbhOHYn2Z0=)
2026-03-17 00:23:47.013891 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGY0S/7zkIu2d69RYFu2a8gyLy15u+84UBPhmvaR7oUrlQYc3Yl7VhNXkcnqtUq3P1YCOz8w9ujVCfiTzEUPF+4=)
2026-03-17 00:23:47.013904 | orchestrator |
2026-03-17 00:23:47.013915 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.013928 | orchestrator | Tuesday 17 March 2026 00:23:42 +0000 (0:00:01.229) 0:00:07.986 *********
2026-03-17 00:23:47.013941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD59idHTmhtofO5IuldpTO6Qd4YA0jVMRko0M1qG3TsuxMQC6sJANC7qMGUQUdp/UCmhv+H0z5tVjUjgkFES0/w=)
2026-03-17 00:23:47.013954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBnUhaTmSmVnlTByd4CayQzr+FIIjjVEcwO+v3xCssJZ)
2026-03-17 00:23:47.014003 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmNYWjWkp2kz9Ba55pLI6okUtv4uM/QTXJ/MU71SX161HEkv6sZppi1DD50jNjcPZc9UQc24KoJegxfKlfxDfqPh6U3XbY9yWh9wP/KQzxbkSSox3fJCYRkDCT+p6BJtV6jNANrtQRS8bIeexkOAheDf73WDRWxu8tcr38xd71WO2/5iTIPRAgNDNnHKmDNWeDVymQeGHb9IDWd3TsWxvvxyhxtdg8roqdJSV5XtnHuIDqnxIo+XT9j6KnirYLa7GcSyXXRRdq1NlGhXmnqWWN2+0qbvPP4MCACv9DdhQvuLh8VGXdbQU5pp/cLdPqtFlMdXfsVuIDPos96nmWpIfz6Qxwcw0D5J9MijopD+bwnJdbPjV65OyJ5MRN002Qj+bepxUrylJyx/JXzIAhzQed/Qp7CiEUQ8Ds7tgbk24z7KNv/4sVXEVL+CVvZkzquAxsaEItuCKnWmjoGepPQI3CkeIgSUOequZsONk1LgyRFF545zTZ6bWrWVV4dqmePvs=)
2026-03-17 00:23:47.014079 | orchestrator |
2026-03-17 00:23:47.014094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.014132 | orchestrator | Tuesday 17 March 2026 00:23:43 +0000 (0:00:01.022) 0:00:09.008 *********
2026-03-17 00:23:47.014144 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBByEDWJSCJMIYBNDI9qCaVZ+xDHXsuH2HNRzoGafe+Iv3LGmRh70gkC23STI0Lh0RD+kSwfhDEBcZWrMIttuAHo=)
2026-03-17 00:23:47.014156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJMQW8JXMmMSSypX5XSkcXe1Kl0K0YF05KYkHuTCZoD1)
2026-03-17 00:23:47.014259 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJgMa0EvF00arCg20DLywYaLNNsfBwRdobvsQSGBQUpYonHlkDNa8YZu/HjxQPY9Hb7oDtHaLyBplv1rgV2MUaC7Tta5virrLeHlYIT3ywKakx8yQXoO3sh/RKUmwRK3icxrySLuAQiLw3ewMa491D+bA8QSxZX74HQmsmzo6dcl80iL/cD2zG+i2iiJVtlfmr3HY2X2wDSh/HIabZKGYFpYC59rJ6OQtw00QUkUejg6ninVun+UyTZueOfiQZ/vsjidMXu8UteNU5njZUzkEd8ZsKDjjhR36QFL862aw4zgLS3X+GnoDMTSTF6aDjtPOa/iOgiMXpQ7q6O421inGKpmkQEWNrqeJFrfyGaAXJUNeZrAuW1sdQgWwZqylw+4j6W2SVvsV+YL9r/1bEIQZbgIH2cQN9kfylX7f4NdeJHPhYtr5PNMVrdkONHDgSjgv22oWPU2y9VvcVkVGRW+P61jtuOylR10FevQdddrVwE9wgmoGQoP5BeFJOB34Zyt0=)
2026-03-17 00:23:47.014274 | orchestrator |
2026-03-17 00:23:47.014287 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.014298 | orchestrator | Tuesday 17 March 2026 00:23:44 +0000 (0:00:00.969) 0:00:09.978 *********
2026-03-17 00:23:47.014313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkLOFDdO+IgIKzaB9STrmzB3nzNuMtcnWTx5cJ1/79xrlXuXz2geZTgpMR1S5eEm2le+EgZa9PICUCr/hdZv+Q=)
2026-03-17 00:23:47.014326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJe5GW97g+b8aGs9TXssgFUmm1gPf225wWX8IxzUdT1OBrtaoeM6ZyJNqEGg5pAkeshSmm4iJP+iCClAb7atfTZLqRjZ/8Gxy+fAywyvyZDQ5gO9My5LOsszOFArymD4Jw41WBJ6u8gSbNulbZvYL2Zgxy3R4K+9vaiE2dvqO2yZlOsEJJ74i1LrLibkCsUgj5zkryyXfNYSrP2Fel5XdQotYahwRarWkjdfoZsHYJkYX4CkZ+f3AaLuv6tQ09nUj26XJFCtgPJe4PMnOIAJ9MqtHtFR13D3UsCnWSuCxaijOdhlkpFgOinZqknnWRi8HyqM9nX4M0ib4Auhi6maFNF+JpF/MMafvtG90xk3EnAEsOUaoXvuqC3SCeXUCZ6S8IgpXPulb8GrDXgdpwptG91Bzh6w1ZsJKiPKymQ3s55ce62zlSWCRdVmxJh/iosXvgaqiU6f6dY/kenwXCEWPRvONM9LWUHw/aBQDit3QxPd+/O4sfMWz298spdbpJD28=)
2026-03-17 00:23:47.014337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGxt0B2LjiohtGbqNBtQq4MMhpVB5Shr6BDQEjdAhHQ+)
2026-03-17 00:23:47.014348 | orchestrator |
2026-03-17 00:23:47.014360 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.014371 | orchestrator | Tuesday 17 March 2026 00:23:45 +0000 (0:00:00.927) 0:00:10.905 *********
2026-03-17 00:23:47.014382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmEh1UG1wYZYPw7F4BHl3MxxzkOuIyQujU5q4CKokDDZzLP2pTVOBWBYfPqHPXobzAl3FLhEFN5f2FNub2Rh2cmr7bXRYR1xCySu9c6HVWnnpxxxJt8xMEMvuXK8jz5shW1Yt7NsOJm3NGbUg7T3iTxO5xRoS+qJVucraZ6Oyl392+nZ8hb7XTjlVdOQydUj/15ab+I3uxMg8eMnsVhEOoSBCQpQNTGtaSpMkWg18Pgu/RMobHm18aC5Ng1tb35LF8zxJsMOhxeKwySIwr+P+03HD0ODh3hvsYYgHO59RO6GApGrcRX2tEiYmB77dDZE4hgSGhPeWSpLhwM+AYq9m3SV4K6K+wwuVFpfevprdoL4VuY0wOcCI9Pt8Vs5foOncAlse2G5NQs4iHtbV1YlB9aKVmHxvVMC7lk42yW19sYXnx46mjW7WCG8VzZUSvcCZaGQK6jxQsLnTC/YOp3Zmv0Ndi6fIYhZvQeyEZzCHtreEJWQoaHl9c8lyeEXipTgc=)
2026-03-17 00:23:47.014402 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA22HuXQKLKGLKr4SxmYIcFGSThTACvp6bEOWe+Q5IQ)
2026-03-17 00:23:47.014413 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN9dO+vPZWlpRrbzOZILrzcld36wlJWoLtFNVhlEj7OCln/knYo0iQVfoexyMhNIo/p0Q1H1fuVXA3GuJSHg8Fc=)
2026-03-17 00:23:47.014424 | orchestrator |
2026-03-17 00:23:47.014435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:47.014446 | orchestrator | Tuesday 17 March 2026 00:23:46 +0000 (0:00:00.949) 0:00:11.854 *********
2026-03-17 00:23:47.014468 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxf++mx8XTRm0D7RlJe6C3TscJWQm5Tdw98GOewc09wHQbjoQ4Rmb8hm24l1UgHhDTCeUuU/VZeI2ffQMPSkG6wPP9hdcgQSykwzmGeg7Rz3/3Rny5bxNEg7QojrQXrGWDDNQaFT3PVhdCrjtBMv4e0QdYKwr8+/RmVgGo9Lisuoff5c8sxSiz5SRbOELL7UdzpKOHRzVQYwhFsIUD8xLxMquH5ox29eNmdz6MjhFaHfmWewflHzItfaeKDIa5w+TMxbpSyyw1TmwgiMAiuxatIlMboS4KGF8e/gMmpJ4/5VjYJ31jD7k+mvBveSz8lP7QKmVdpq7LKa1qDHp/DrSUzMnV2GP7ZPuocM9J0milldoWYioOxsrByYjGehmOFy435AADXwbAeyaAnVmusG8Q2uPH2Jq+1IXuiJGzb+kE1+wobdkLrOSB6minduF9154F086taSLCWixkoPjgb0+faOCxPAqXho94TR8m6z+oRZCmHeJz+UtD+fPHFVoAMKs=)
2026-03-17 00:23:58.613514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIvgu21OOVCnSijusJAFQsVIxiNM3GxnZCKDciCO3HW0)
2026-03-17 00:23:58.613627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOVeXCSUkTc9ZM86X+RsUVLgFO2UZMCa1apue7NshMK9o/BPRlUtGm/Qcinj8SH6AVobrcdMkz65wAq+i6p5ZFA=)
2026-03-17 00:23:58.613646 | orchestrator |
2026-03-17 00:23:58.613659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:23:58.613672 | orchestrator | Tuesday 17 March 2026 00:23:47 +0000 (0:00:01.071) 0:00:12.925 *********
2026-03-17 00:23:58.613683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZWCq+dJnADZhCGErSRegTH4GMfNXXPpEthUaBHdFI4dsNIj616cWbB+HEi55q+GrMyrxuPG4OFWaOPsksP+DE=)
2026-03-17 00:23:58.613697 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqTuZeRRgyvzL9EHXW9BF8DfyQ3hmPrIo959LkBGJsfjJPrcaKGy5Hwm9rO+evkcEoC5XNE5bYnNThJ5cce71Hai2jAvkWBQHRQYMi8L8nnLTwnu/TFavxKoZ2im1B+RTIa0IrkBbGlx5qLwRfzZ1AhjbmSeDvgkhO9ysA2o0dKSqwjsPsqPBw3fAfjvf3E0OHgvcZfIczLKNomX11RBCF/Pw0maqtAv4D8bOruBH3vRlqJdPrpUDXKbklwhBDU9B7XwBMkKS7ef+bkNt9+zZJFG8Xdr0na8IADv7CTPZUm3tR1k9QTNfIaT09AB7VfWWfUH8adSpQIdSFpmvRN5/43i0WrSzq5cPXbClhXpdGiARw12ZN4sWocyBTSALtrbw9KyBfTpnPCtbLmkHCO3DlE3lupoQCEXndPy3ZLkxLGvG34LH2XNdpILDPQU31WMEDPJ3jLOWaxh1DlKtB1hgxin6eHbVUWX7/Y6el0GZrS6xvkbWPQODlOqB9K5m7iWk=)
2026-03-17 00:23:58.613711 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYVmqAuLz62om1iiD9I+2yGn6pEzXn8R88Eg+2Q0ixm)
2026-03-17 00:23:58.613722 | orchestrator |
2026-03-17 00:23:58.613734 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-03-17 00:23:58.613746 | orchestrator | Tuesday 17 March 2026 00:23:48 +0000 (0:00:01.039) 
0:00:13.965 ********* 2026-03-17 00:23:58.613757 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-17 00:23:58.613769 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-17 00:23:58.613780 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-17 00:23:58.613791 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-17 00:23:58.613802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-17 00:23:58.613832 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-17 00:23:58.613844 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-17 00:23:58.613876 | orchestrator | 2026-03-17 00:23:58.613888 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-17 00:23:58.613900 | orchestrator | Tuesday 17 March 2026 00:23:54 +0000 (0:00:05.422) 0:00:19.387 ********* 2026-03-17 00:23:58.613911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-17 00:23:58.613924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-17 00:23:58.613935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-17 00:23:58.613946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-17 00:23:58.613957 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-17 00:23:58.613968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-17 00:23:58.613979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-17 00:23:58.613990 | orchestrator | 2026-03-17 00:23:58.614001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:23:58.614012 | orchestrator | Tuesday 17 March 2026 00:23:54 +0000 (0:00:00.168) 0:00:19.556 ********* 2026-03-17 00:23:58.614095 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+uFR4Io1ZqZfcITmVHm2DQZ5bHHvZkq2Rrt068S27Y) 2026-03-17 00:23:58.614171 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcP0rpa+DKazLZvB7qBSMKhgmt/fbFL/fXTlx0UkZgXleHQF0UOPzTw1tVml0Z+kfmyUNVDsehFH3I/LGDSkqLOpWCLAqfuqUpsOnthRS/aWJI8dw94EIYiuY4wHtEa2lMO6S6BjILG0+PnevdbZ0C/B+Kx6EYGN3W1Adog6n3PwOGQFM9WZX0jeJ7oLO+Ow0p+FqZALtwsusHsHNpPgkNCE0CpBcuT3zEiV5d16K7Tb66NGdlXRCjWTySsTVJxJ0MpghGlriaz3gBBigyOQRgdDFAvow5teKvtyPg3g61B1ry+ZNnAodkYABe9dkZwPZ30lsfDesOUkiYK7V4iDuFXxsP/qHth67xE3f1Aqkjy0LEyL4inbUBQUUHrBwuvFTKdcbr8r4DPUXoaZt7TUPjCS0iG4cIfwhGcF9Di3xvTgrTLXT0dFdZTUtW6MqytLX3rfg8XuO2LFIAZCxGcjNFTAD2y5SFdfMf7M66ICWgs7v1Ab58LuEQXfbhOHYn2Z0=) 2026-03-17 00:23:58.614201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGY0S/7zkIu2d69RYFu2a8gyLy15u+84UBPhmvaR7oUrlQYc3Yl7VhNXkcnqtUq3P1YCOz8w9ujVCfiTzEUPF+4=) 2026-03-17 
00:23:58.614214 | orchestrator | 2026-03-17 00:23:58.614237 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:23:58.614253 | orchestrator | Tuesday 17 March 2026 00:23:55 +0000 (0:00:01.115) 0:00:20.671 ********* 2026-03-17 00:23:58.614272 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD59idHTmhtofO5IuldpTO6Qd4YA0jVMRko0M1qG3TsuxMQC6sJANC7qMGUQUdp/UCmhv+H0z5tVjUjgkFES0/w=) 2026-03-17 00:23:58.614293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmNYWjWkp2kz9Ba55pLI6okUtv4uM/QTXJ/MU71SX161HEkv6sZppi1DD50jNjcPZc9UQc24KoJegxfKlfxDfqPh6U3XbY9yWh9wP/KQzxbkSSox3fJCYRkDCT+p6BJtV6jNANrtQRS8bIeexkOAheDf73WDRWxu8tcr38xd71WO2/5iTIPRAgNDNnHKmDNWeDVymQeGHb9IDWd3TsWxvvxyhxtdg8roqdJSV5XtnHuIDqnxIo+XT9j6KnirYLa7GcSyXXRRdq1NlGhXmnqWWN2+0qbvPP4MCACv9DdhQvuLh8VGXdbQU5pp/cLdPqtFlMdXfsVuIDPos96nmWpIfz6Qxwcw0D5J9MijopD+bwnJdbPjV65OyJ5MRN002Qj+bepxUrylJyx/JXzIAhzQed/Qp7CiEUQ8Ds7tgbk24z7KNv/4sVXEVL+CVvZkzquAxsaEItuCKnWmjoGepPQI3CkeIgSUOequZsONk1LgyRFF545zTZ6bWrWVV4dqmePvs=) 2026-03-17 00:23:58.614327 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBnUhaTmSmVnlTByd4CayQzr+FIIjjVEcwO+v3xCssJZ) 2026-03-17 00:23:58.614346 | orchestrator | 2026-03-17 00:23:58.614364 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:23:58.614384 | orchestrator | Tuesday 17 March 2026 00:23:56 +0000 (0:00:01.101) 0:00:21.773 ********* 2026-03-17 00:23:58.614403 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJMQW8JXMmMSSypX5XSkcXe1Kl0K0YF05KYkHuTCZoD1) 2026-03-17 00:23:58.614422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDJgMa0EvF00arCg20DLywYaLNNsfBwRdobvsQSGBQUpYonHlkDNa8YZu/HjxQPY9Hb7oDtHaLyBplv1rgV2MUaC7Tta5virrLeHlYIT3ywKakx8yQXoO3sh/RKUmwRK3icxrySLuAQiLw3ewMa491D+bA8QSxZX74HQmsmzo6dcl80iL/cD2zG+i2iiJVtlfmr3HY2X2wDSh/HIabZKGYFpYC59rJ6OQtw00QUkUejg6ninVun+UyTZueOfiQZ/vsjidMXu8UteNU5njZUzkEd8ZsKDjjhR36QFL862aw4zgLS3X+GnoDMTSTF6aDjtPOa/iOgiMXpQ7q6O421inGKpmkQEWNrqeJFrfyGaAXJUNeZrAuW1sdQgWwZqylw+4j6W2SVvsV+YL9r/1bEIQZbgIH2cQN9kfylX7f4NdeJHPhYtr5PNMVrdkONHDgSjgv22oWPU2y9VvcVkVGRW+P61jtuOylR10FevQdddrVwE9wgmoGQoP5BeFJOB34Zyt0=) 2026-03-17 00:23:58.614442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBByEDWJSCJMIYBNDI9qCaVZ+xDHXsuH2HNRzoGafe+Iv3LGmRh70gkC23STI0Lh0RD+kSwfhDEBcZWrMIttuAHo=) 2026-03-17 00:23:58.614461 | orchestrator | 2026-03-17 00:23:58.614479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:23:58.614499 | orchestrator | Tuesday 17 March 2026 00:23:57 +0000 (0:00:01.077) 0:00:22.850 ********* 2026-03-17 00:23:58.614527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJe5GW97g+b8aGs9TXssgFUmm1gPf225wWX8IxzUdT1OBrtaoeM6ZyJNqEGg5pAkeshSmm4iJP+iCClAb7atfTZLqRjZ/8Gxy+fAywyvyZDQ5gO9My5LOsszOFArymD4Jw41WBJ6u8gSbNulbZvYL2Zgxy3R4K+9vaiE2dvqO2yZlOsEJJ74i1LrLibkCsUgj5zkryyXfNYSrP2Fel5XdQotYahwRarWkjdfoZsHYJkYX4CkZ+f3AaLuv6tQ09nUj26XJFCtgPJe4PMnOIAJ9MqtHtFR13D3UsCnWSuCxaijOdhlkpFgOinZqknnWRi8HyqM9nX4M0ib4Auhi6maFNF+JpF/MMafvtG90xk3EnAEsOUaoXvuqC3SCeXUCZ6S8IgpXPulb8GrDXgdpwptG91Bzh6w1ZsJKiPKymQ3s55ce62zlSWCRdVmxJh/iosXvgaqiU6f6dY/kenwXCEWPRvONM9LWUHw/aBQDit3QxPd+/O4sfMWz298spdbpJD28=) 2026-03-17 00:23:58.614544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkLOFDdO+IgIKzaB9STrmzB3nzNuMtcnWTx5cJ1/79xrlXuXz2geZTgpMR1S5eEm2le+EgZa9PICUCr/hdZv+Q=) 
2026-03-17 00:23:58.614571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGxt0B2LjiohtGbqNBtQq4MMhpVB5Shr6BDQEjdAhHQ+) 2026-03-17 00:24:02.865521 | orchestrator | 2026-03-17 00:24:02.865625 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:02.865643 | orchestrator | Tuesday 17 March 2026 00:23:58 +0000 (0:00:01.062) 0:00:23.913 ********* 2026-03-17 00:24:02.865659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmEh1UG1wYZYPw7F4BHl3MxxzkOuIyQujU5q4CKokDDZzLP2pTVOBWBYfPqHPXobzAl3FLhEFN5f2FNub2Rh2cmr7bXRYR1xCySu9c6HVWnnpxxxJt8xMEMvuXK8jz5shW1Yt7NsOJm3NGbUg7T3iTxO5xRoS+qJVucraZ6Oyl392+nZ8hb7XTjlVdOQydUj/15ab+I3uxMg8eMnsVhEOoSBCQpQNTGtaSpMkWg18Pgu/RMobHm18aC5Ng1tb35LF8zxJsMOhxeKwySIwr+P+03HD0ODh3hvsYYgHO59RO6GApGrcRX2tEiYmB77dDZE4hgSGhPeWSpLhwM+AYq9m3SV4K6K+wwuVFpfevprdoL4VuY0wOcCI9Pt8Vs5foOncAlse2G5NQs4iHtbV1YlB9aKVmHxvVMC7lk42yW19sYXnx46mjW7WCG8VzZUSvcCZaGQK6jxQsLnTC/YOp3Zmv0Ndi6fIYhZvQeyEZzCHtreEJWQoaHl9c8lyeEXipTgc=) 2026-03-17 00:24:02.865676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN9dO+vPZWlpRrbzOZILrzcld36wlJWoLtFNVhlEj7OCln/knYo0iQVfoexyMhNIo/p0Q1H1fuVXA3GuJSHg8Fc=) 2026-03-17 00:24:02.865714 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA22HuXQKLKGLKr4SxmYIcFGSThTACvp6bEOWe+Q5IQ) 2026-03-17 00:24:02.865728 | orchestrator | 2026-03-17 00:24:02.865740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:02.865767 | orchestrator | Tuesday 17 March 2026 00:23:59 +0000 (0:00:01.119) 0:00:25.033 ********* 2026-03-17 00:24:02.865780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxf++mx8XTRm0D7RlJe6C3TscJWQm5Tdw98GOewc09wHQbjoQ4Rmb8hm24l1UgHhDTCeUuU/VZeI2ffQMPSkG6wPP9hdcgQSykwzmGeg7Rz3/3Rny5bxNEg7QojrQXrGWDDNQaFT3PVhdCrjtBMv4e0QdYKwr8+/RmVgGo9Lisuoff5c8sxSiz5SRbOELL7UdzpKOHRzVQYwhFsIUD8xLxMquH5ox29eNmdz6MjhFaHfmWewflHzItfaeKDIa5w+TMxbpSyyw1TmwgiMAiuxatIlMboS4KGF8e/gMmpJ4/5VjYJ31jD7k+mvBveSz8lP7QKmVdpq7LKa1qDHp/DrSUzMnV2GP7ZPuocM9J0milldoWYioOxsrByYjGehmOFy435AADXwbAeyaAnVmusG8Q2uPH2Jq+1IXuiJGzb+kE1+wobdkLrOSB6minduF9154F086taSLCWixkoPjgb0+faOCxPAqXho94TR8m6z+oRZCmHeJz+UtD+fPHFVoAMKs=) 2026-03-17 00:24:02.865792 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOVeXCSUkTc9ZM86X+RsUVLgFO2UZMCa1apue7NshMK9o/BPRlUtGm/Qcinj8SH6AVobrcdMkz65wAq+i6p5ZFA=) 2026-03-17 00:24:02.865804 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIvgu21OOVCnSijusJAFQsVIxiNM3GxnZCKDciCO3HW0) 2026-03-17 00:24:02.865816 | orchestrator | 2026-03-17 00:24:02.865828 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:02.865840 | orchestrator | Tuesday 17 March 2026 00:24:00 +0000 (0:00:01.075) 0:00:26.109 ********* 2026-03-17 00:24:02.865852 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqTuZeRRgyvzL9EHXW9BF8DfyQ3hmPrIo959LkBGJsfjJPrcaKGy5Hwm9rO+evkcEoC5XNE5bYnNThJ5cce71Hai2jAvkWBQHRQYMi8L8nnLTwnu/TFavxKoZ2im1B+RTIa0IrkBbGlx5qLwRfzZ1AhjbmSeDvgkhO9ysA2o0dKSqwjsPsqPBw3fAfjvf3E0OHgvcZfIczLKNomX11RBCF/Pw0maqtAv4D8bOruBH3vRlqJdPrpUDXKbklwhBDU9B7XwBMkKS7ef+bkNt9+zZJFG8Xdr0na8IADv7CTPZUm3tR1k9QTNfIaT09AB7VfWWfUH8adSpQIdSFpmvRN5/43i0WrSzq5cPXbClhXpdGiARw12ZN4sWocyBTSALtrbw9KyBfTpnPCtbLmkHCO3DlE3lupoQCEXndPy3ZLkxLGvG34LH2XNdpILDPQU31WMEDPJ3jLOWaxh1DlKtB1hgxin6eHbVUWX7/Y6el0GZrS6xvkbWPQODlOqB9K5m7iWk=) 2026-03-17 00:24:02.865865 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZWCq+dJnADZhCGErSRegTH4GMfNXXPpEthUaBHdFI4dsNIj616cWbB+HEi55q+GrMyrxuPG4OFWaOPsksP+DE=) 2026-03-17 00:24:02.865877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYVmqAuLz62om1iiD9I+2yGn6pEzXn8R88Eg+2Q0ixm) 2026-03-17 00:24:02.865888 | orchestrator | 2026-03-17 00:24:02.865900 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-17 00:24:02.865912 | orchestrator | Tuesday 17 March 2026 00:24:01 +0000 (0:00:01.029) 0:00:27.139 ********* 2026-03-17 00:24:02.865924 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-17 00:24:02.865936 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-17 00:24:02.865948 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-17 00:24:02.865960 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-17 00:24:02.865971 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-17 00:24:02.865983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-17 00:24:02.865995 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-17 00:24:02.866007 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:02.866077 | orchestrator | 2026-03-17 00:24:02.866138 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-17 00:24:02.866152 | orchestrator | Tuesday 17 March 2026 00:24:02 +0000 (0:00:00.174) 0:00:27.313 ********* 2026-03-17 00:24:02.866226 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:02.866253 | orchestrator | 2026-03-17 00:24:02.866266 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-17 00:24:02.866279 | orchestrator | Tuesday 17 March 2026 
00:24:02 +0000 (0:00:00.040) 0:00:27.354 ********* 2026-03-17 00:24:02.866291 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:02.866304 | orchestrator | 2026-03-17 00:24:02.866316 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-17 00:24:02.866328 | orchestrator | Tuesday 17 March 2026 00:24:02 +0000 (0:00:00.060) 0:00:27.414 ********* 2026-03-17 00:24:02.866341 | orchestrator | changed: [testbed-manager] 2026-03-17 00:24:02.866353 | orchestrator | 2026-03-17 00:24:02.866366 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:24:02.866379 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:24:02.866393 | orchestrator | 2026-03-17 00:24:02.866405 | orchestrator | 2026-03-17 00:24:02.866416 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:24:02.866427 | orchestrator | Tuesday 17 March 2026 00:24:02 +0000 (0:00:00.495) 0:00:27.909 ********* 2026-03-17 00:24:02.866438 | orchestrator | =============================================================================== 2026-03-17 00:24:02.866448 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.40s 2026-03-17 00:24:02.866459 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.42s 2026-03-17 00:24:02.866471 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-03-17 00:24:02.866482 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-17 00:24:02.866493 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-17 00:24:02.866504 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 
2026-03-17 00:24:02.866515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-17 00:24:02.866525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-17 00:24:02.866536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-17 00:24:02.866547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-17 00:24:02.866558 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:24:02.866578 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-17 00:24:02.866589 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:24:02.866600 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-17 00:24:02.866611 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-17 00:24:02.866622 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-03-17 00:24:02.866633 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2026-03-17 00:24:02.866644 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-17 00:24:02.866655 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-17 00:24:02.866666 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-17 00:24:03.038706 | orchestrator | + osism apply squid 2026-03-17 00:24:14.356220 | orchestrator | 2026-03-17 00:24:14 | INFO  | Prepare task for execution of squid. 
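The known_hosts play above repeatedly runs "Write scanned known_hosts entries", once per host, first keyed by hostname and then by ansible_host IP. Conceptually each run takes the (host, key-type, key) triples produced by ssh-keyscan and merges them into the known_hosts file, replacing any stale entry for the same host and key type. The sketch below is a hypothetical illustration of that merge logic, not the actual osism.commons.known_hosts implementation (which uses Ansible modules against the real file):

```python
def merge_known_hosts(existing_lines, scanned_entries):
    """Merge ssh-keyscan results into a list of known_hosts lines.

    existing_lines: list of "host key-type base64key" strings.
    scanned_entries: iterable of (host, key_type, key) tuples, as
    produced by parsing ssh-keyscan output.
    """
    merged = list(existing_lines)
    for host, key_type, key in scanned_entries:
        # Drop any previous entry for this host/key-type pair so a
        # rescanned (possibly rotated) key replaces the stale one.
        merged = [
            line for line in merged
            if line.split()[:2] != [host, key_type]
        ]
        merged.append(f"{host} {key_type} {key}")
    return merged
```

With this shape, re-running the merge with unchanged scan results is idempotent, which matches the role reporting `changed` only when entries actually differ.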
2026-03-17 00:24:14.427801 | orchestrator | 2026-03-17 00:24:14 | INFO  | Task 54a93c2b-b31b-4290-831b-a335f310311c (squid) was prepared for execution. 2026-03-17 00:24:14.427898 | orchestrator | 2026-03-17 00:24:14 | INFO  | It takes a moment until task 54a93c2b-b31b-4290-831b-a335f310311c (squid) has been started and output is visible here. 2026-03-17 00:26:10.276783 | orchestrator | 2026-03-17 00:26:10.276922 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-17 00:26:10.276949 | orchestrator | 2026-03-17 00:26:10.276963 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-17 00:26:10.276976 | orchestrator | Tuesday 17 March 2026 00:24:17 +0000 (0:00:00.189) 0:00:00.189 ********* 2026-03-17 00:26:10.276987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:26:10.276999 | orchestrator | 2026-03-17 00:26:10.277011 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-17 00:26:10.277022 | orchestrator | Tuesday 17 March 2026 00:24:17 +0000 (0:00:00.076) 0:00:00.265 ********* 2026-03-17 00:26:10.277033 | orchestrator | ok: [testbed-manager] 2026-03-17 00:26:10.277045 | orchestrator | 2026-03-17 00:26:10.277056 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-17 00:26:10.277067 | orchestrator | Tuesday 17 March 2026 00:24:19 +0000 (0:00:02.258) 0:00:02.523 ********* 2026-03-17 00:26:10.277118 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-17 00:26:10.277129 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-17 00:26:10.277140 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-17 00:26:10.277152 | orchestrator | 2026-03-17 00:26:10.277163 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-17 00:26:10.277174 | orchestrator | Tuesday 17 March 2026 00:24:21 +0000 (0:00:01.299) 0:00:03.823 ********* 2026-03-17 00:26:10.277185 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-17 00:26:10.277196 | orchestrator | 2026-03-17 00:26:10.277207 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-17 00:26:10.277218 | orchestrator | Tuesday 17 March 2026 00:24:22 +0000 (0:00:01.033) 0:00:04.857 ********* 2026-03-17 00:26:10.277229 | orchestrator | ok: [testbed-manager] 2026-03-17 00:26:10.277240 | orchestrator | 2026-03-17 00:26:10.277251 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-17 00:26:10.277262 | orchestrator | Tuesday 17 March 2026 00:24:22 +0000 (0:00:00.327) 0:00:05.184 ********* 2026-03-17 00:26:10.277279 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:10.277300 | orchestrator | 2026-03-17 00:26:10.277320 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-17 00:26:10.277339 | orchestrator | Tuesday 17 March 2026 00:24:23 +0000 (0:00:00.904) 0:00:06.088 ********* 2026-03-17 00:26:10.277359 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-17 00:26:10.277382 | orchestrator | ok: [testbed-manager] 2026-03-17 00:26:10.277403 | orchestrator | 2026-03-17 00:26:10.277423 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-17 00:26:10.277442 | orchestrator | Tuesday 17 March 2026 00:24:57 +0000 (0:00:34.067) 0:00:40.156 ********* 2026-03-17 00:26:10.277463 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:10.277484 | orchestrator | 2026-03-17 00:26:10.277506 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-17 00:26:10.277527 | orchestrator | Tuesday 17 March 2026 00:25:09 +0000 (0:00:11.819) 0:00:51.976 ********* 2026-03-17 00:26:10.277548 | orchestrator | Pausing for 60 seconds 2026-03-17 00:26:10.277569 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:10.277588 | orchestrator | 2026-03-17 00:26:10.277608 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-17 00:26:10.277629 | orchestrator | Tuesday 17 March 2026 00:26:09 +0000 (0:01:00.083) 0:01:52.059 ********* 2026-03-17 00:26:10.277649 | orchestrator | ok: [testbed-manager] 2026-03-17 00:26:10.277670 | orchestrator | 2026-03-17 00:26:10.277689 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-17 00:26:10.277739 | orchestrator | Tuesday 17 March 2026 00:26:09 +0000 (0:00:00.074) 0:01:52.133 ********* 2026-03-17 00:26:10.277760 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:10.277776 | orchestrator | 2026-03-17 00:26:10.277794 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:26:10.277810 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:26:10.277826 | orchestrator | 2026-03-17 00:26:10.277843 | orchestrator | 2026-03-17 00:26:10.277862 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 00:26:10.277881 | orchestrator | Tuesday 17 March 2026 00:26:10 +0000 (0:00:00.594) 0:01:52.728 ********* 2026-03-17 00:26:10.277900 | orchestrator | =============================================================================== 2026-03-17 00:26:10.277918 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-17 00:26:10.277936 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.07s 2026-03-17 00:26:10.277954 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.82s 2026-03-17 00:26:10.277971 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.26s 2026-03-17 00:26:10.277987 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.30s 2026-03-17 00:26:10.278004 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2026-03-17 00:26:10.278127 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2026-03-17 00:26:10.278150 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2026-03-17 00:26:10.278170 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-17 00:26:10.278189 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-17 00:26:10.278209 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-17 00:26:10.479129 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-17 00:26:10.479245 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-17 00:26:10.484247 | orchestrator | + set -e 2026-03-17 00:26:10.484310 | orchestrator | + NAMESPACE=kolla 2026-03-17 
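The squid play above shows Ansible's retries/until pattern in action: "Manage squid service" logs `FAILED - RETRYING ... (10 retries left)` before eventually reporting `ok`, and the handlers then wait for a healthy container. A minimal stand-in for that polling loop, assuming a generic `check` callable in place of the role's real container-state query, looks like this:

```python
import time


def wait_until(check, retries=10, delay=0.0):
    """Poll `check` until it returns True or retries are exhausted.

    Mirrors the Ansible retries/until behaviour seen in the
    "Manage squid service" task (10 retries); `check` is a
    hypothetical stand-in for querying the service state.
    Returns the number of attempts used.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        time.sleep(delay)
    raise TimeoutError(f"service not healthy after {retries} attempts")
```

In the log, the first attempt fails (the container is still starting) and a later retry succeeds, so the task ends `ok` after roughly one retry interval, which is why it dominates the recap at 34.07s.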
00:26:10.484326 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-17 00:26:10.487630 | orchestrator | ++ semver latest 9.0.0 2026-03-17 00:26:10.536384 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-17 00:26:10.536508 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-17 00:26:10.536539 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-17 00:26:21.990420 | orchestrator | 2026-03-17 00:26:21 | INFO  | Prepare task for execution of operator. 2026-03-17 00:26:22.069946 | orchestrator | 2026-03-17 00:26:22 | INFO  | Task 05966134-c84e-49d4-a69e-6e942cd2a429 (operator) was prepared for execution. 2026-03-17 00:26:22.070122 | orchestrator | 2026-03-17 00:26:22 | INFO  | It takes a moment until task 05966134-c84e-49d4-a69e-6e942cd2a429 (operator) has been started and output is visible here. 2026-03-17 00:26:37.202495 | orchestrator | 2026-03-17 00:26:37.202602 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-17 00:26:37.202618 | orchestrator | 2026-03-17 00:26:37.202630 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:26:37.202642 | orchestrator | Tuesday 17 March 2026 00:26:25 +0000 (0:00:00.180) 0:00:00.180 ********* 2026-03-17 00:26:37.202653 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:26:37.202666 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:26:37.202677 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:26:37.202688 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:26:37.202698 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:26:37.202709 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:26:37.202724 | orchestrator | 2026-03-17 00:26:37.202735 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-17 00:26:37.202772 | orchestrator | Tuesday 17 March 2026 00:26:28 
+0000 (0:00:03.473) 0:00:03.654 ********* 2026-03-17 00:26:37.202783 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:26:37.202794 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:26:37.202805 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:26:37.202816 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:26:37.202826 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:26:37.202837 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:26:37.202848 | orchestrator | 2026-03-17 00:26:37.202859 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-17 00:26:37.202869 | orchestrator | 2026-03-17 00:26:37.202880 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-17 00:26:37.202891 | orchestrator | Tuesday 17 March 2026 00:26:29 +0000 (0:00:00.845) 0:00:04.499 ********* 2026-03-17 00:26:37.202902 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:26:37.202913 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:26:37.202923 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:26:37.202934 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:26:37.202944 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:26:37.202955 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:26:37.202965 | orchestrator | 2026-03-17 00:26:37.202976 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-17 00:26:37.203003 | orchestrator | Tuesday 17 March 2026 00:26:29 +0000 (0:00:00.143) 0:00:04.642 ********* 2026-03-17 00:26:37.203014 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:26:37.203031 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:26:37.203044 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:26:37.203057 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:26:37.203093 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:26:37.203105 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:26:37.203117 | orchestrator | 
2026-03-17 00:26:37.203129 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-17 00:26:37.203142 | orchestrator | Tuesday 17 March 2026 00:26:29 +0000 (0:00:00.153) 0:00:04.795 ********* 2026-03-17 00:26:37.203155 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:37.203168 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:37.203180 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:37.203192 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:37.203204 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:37.203217 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:37.203229 | orchestrator | 2026-03-17 00:26:37.203242 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-17 00:26:37.203254 | orchestrator | Tuesday 17 March 2026 00:26:30 +0000 (0:00:00.713) 0:00:05.509 ********* 2026-03-17 00:26:37.203266 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:37.203279 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:37.203291 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:37.203303 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:37.203315 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:37.203326 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:37.203338 | orchestrator | 2026-03-17 00:26:37.203351 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-17 00:26:37.203363 | orchestrator | Tuesday 17 March 2026 00:26:31 +0000 (0:00:00.894) 0:00:06.403 ********* 2026-03-17 00:26:37.203376 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-17 00:26:37.203389 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-17 00:26:37.203400 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-17 00:26:37.203410 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-03-17 00:26:37.203421 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-17 00:26:37.203432 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-17 00:26:37.203443 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-17 00:26:37.203454 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-17 00:26:37.203464 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-17 00:26:37.203485 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-17 00:26:37.203496 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-17 00:26:37.203507 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-17 00:26:37.203517 | orchestrator | 2026-03-17 00:26:37.203528 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-17 00:26:37.203540 | orchestrator | Tuesday 17 March 2026 00:26:32 +0000 (0:00:01.180) 0:00:07.583 ********* 2026-03-17 00:26:37.203550 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:37.203561 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:37.203572 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:37.203583 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:37.203594 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:37.203605 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:37.203615 | orchestrator | 2026-03-17 00:26:37.203626 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-17 00:26:37.203638 | orchestrator | Tuesday 17 March 2026 00:26:33 +0000 (0:00:01.359) 0:00:08.943 ********* 2026-03-17 00:26:37.203648 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:26:37.203660 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:26:37.203670 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 
2026-03-17 00:26:37.203681 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:26:37.203692 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:26:37.203721 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:26:37.203742 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203761 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203781 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203799 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203818 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203833 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-17 00:26:37.203844 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203855 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-17 00:26:37.203865 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
2026-03-17 00:26:37.203876 | orchestrator | To avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-17 00:26:37.203886 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203897 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203908 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203918 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203929 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:26:37.203939 | orchestrator | 2026-03-17 00:26:37.203950 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-17 00:26:37.203962 | orchestrator | Tuesday 17 March 2026 00:26:35 +0000 (0:00:01.268) 0:00:10.212 ********* 2026-03-17 00:26:37.203973 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:37.203984 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:37.203994 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:37.204005 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:37.204015 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:37.204026 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:37.204037 | orchestrator | 2026-03-17 00:26:37.204047 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-17 00:26:37.204099 | orchestrator | Tuesday 17 March 2026 00:26:35 +0000 (0:00:00.132) 0:00:10.344 ********* 2026-03-17 00:26:37.204111 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:37.204122 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:37.204133 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:37.204143 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:37.204154 | orchestrator | skipping: 
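The `[WARNING]` above notes that Ansible auto-created its `remote_tmp` directory (`/root/.ansible/tmp`) with mode 0700 and suggests creating it manually instead. A minimal sketch of that manual step, assuming the path from the warning; `$HOME` is used here in place of `/root` so the snippet runs without root privileges:

```shell
# Pre-create Ansible's remote_tmp with mode 0700, as the warning suggests,
# so modules do not have to create it at runtime. The log shows
# /root/.ansible/tmp; $HOME stands in here so this runs unprivileged.
REMOTE_TMP="${HOME}/.ansible/tmp"
mkdir -p "$REMOTE_TMP"
chmod 0700 "$REMOTE_TMP"
```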
[testbed-node-4] 2026-03-17 00:26:37.204164 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:37.204175 | orchestrator | 2026-03-17 00:26:37.204186 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-17 00:26:37.204197 | orchestrator | Tuesday 17 March 2026 00:26:35 +0000 (0:00:00.163) 0:00:10.508 ********* 2026-03-17 00:26:37.204207 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:37.204218 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:37.204229 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:37.204239 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:37.204250 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:37.204260 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:37.204271 | orchestrator | 2026-03-17 00:26:37.204282 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-17 00:26:37.204292 | orchestrator | Tuesday 17 March 2026 00:26:36 +0000 (0:00:00.608) 0:00:11.116 ********* 2026-03-17 00:26:37.204303 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:37.204314 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:37.204324 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:37.204335 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:37.204346 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:37.204356 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:37.204367 | orchestrator | 2026-03-17 00:26:37.204378 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-17 00:26:37.204388 | orchestrator | Tuesday 17 March 2026 00:26:36 +0000 (0:00:00.148) 0:00:11.265 ********* 2026-03-17 00:26:37.204399 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 00:26:37.204410 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:37.204421 | orchestrator | changed: 
[testbed-node-1] => (item=None) 2026-03-17 00:26:37.204431 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:37.204442 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:26:37.204453 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:37.204463 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:26:37.204474 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:26:37.204485 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:37.204495 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:37.204506 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-17 00:26:37.204517 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:37.204527 | orchestrator | 2026-03-17 00:26:37.204538 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-17 00:26:37.204549 | orchestrator | Tuesday 17 March 2026 00:26:36 +0000 (0:00:00.689) 0:00:11.955 ********* 2026-03-17 00:26:37.204559 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:37.204570 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:37.204581 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:37.204591 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:37.204602 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:37.204612 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:37.204623 | orchestrator | 2026-03-17 00:26:37.204634 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-17 00:26:37.204644 | orchestrator | Tuesday 17 March 2026 00:26:37 +0000 (0:00:00.148) 0:00:12.103 ********* 2026-03-17 00:26:37.204655 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:37.204666 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:37.204676 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:37.204687 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:26:37.204712 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:38.472901 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:38.473006 | orchestrator | 2026-03-17 00:26:38.473020 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-17 00:26:38.473031 | orchestrator | Tuesday 17 March 2026 00:26:37 +0000 (0:00:00.131) 0:00:12.234 ********* 2026-03-17 00:26:38.473040 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:38.473049 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:26:38.473058 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:38.473110 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:38.473119 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:38.473128 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:38.473137 | orchestrator | 2026-03-17 00:26:38.473145 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-17 00:26:38.473154 | orchestrator | Tuesday 17 March 2026 00:26:37 +0000 (0:00:00.130) 0:00:12.365 ********* 2026-03-17 00:26:38.473163 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:26:38.473172 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:26:38.473180 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:26:38.473189 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:26:38.473197 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:26:38.473206 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:26:38.473214 | orchestrator | 2026-03-17 00:26:38.473223 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-17 00:26:38.473232 | orchestrator | Tuesday 17 March 2026 00:26:38 +0000 (0:00:00.707) 0:00:13.072 ********* 2026-03-17 00:26:38.473240 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:26:38.473248 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:26:38.473257 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:26:38.473265 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:26:38.473274 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:26:38.473282 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:26:38.473291 | orchestrator | 2026-03-17 00:26:38.473299 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:26:38.473333 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473344 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473353 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473361 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473370 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473379 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:26:38.473387 | orchestrator | 2026-03-17 00:26:38.473396 | orchestrator | 2026-03-17 00:26:38.473404 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:26:38.473413 | orchestrator | Tuesday 17 March 2026 00:26:38 +0000 (0:00:00.230) 0:00:13.302 ********* 2026-03-17 00:26:38.473422 | orchestrator | =============================================================================== 2026-03-17 00:26:38.473431 | orchestrator | Gathering Facts --------------------------------------------------------- 3.47s 2026-03-17 00:26:38.473439 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2026-03-17 00:26:38.473448 | 
orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2026-03-17 00:26:38.473479 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-03-17 00:26:38.473488 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s 2026-03-17 00:26:38.473498 | orchestrator | Do not require tty for all users ---------------------------------------- 0.85s 2026-03-17 00:26:38.473508 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s 2026-03-17 00:26:38.473517 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2026-03-17 00:26:38.473527 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-03-17 00:26:38.473537 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2026-03-17 00:26:38.473547 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-03-17 00:26:38.473557 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-03-17 00:26:38.473567 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-03-17 00:26:38.473576 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-03-17 00:26:38.473586 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-03-17 00:26:38.473595 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2026-03-17 00:26:38.473605 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s 2026-03-17 00:26:38.473615 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2026-03-17 
00:26:38.473624 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-03-17 00:26:38.646713 | orchestrator | + osism apply --environment custom facts 2026-03-17 00:26:39.891365 | orchestrator | 2026-03-17 00:26:39 | INFO  | Trying to run play facts in environment custom 2026-03-17 00:26:49.958951 | orchestrator | 2026-03-17 00:26:49 | INFO  | Prepare task for execution of facts. 2026-03-17 00:26:50.025964 | orchestrator | 2026-03-17 00:26:50 | INFO  | Task 0dfdf436-3ff4-4cc1-93b9-5bd82c90ddc3 (facts) was prepared for execution. 2026-03-17 00:26:50.026202 | orchestrator | 2026-03-17 00:26:50 | INFO  | It takes a moment until task 0dfdf436-3ff4-4cc1-93b9-5bd82c90ddc3 (facts) has been started and output is visible here. 2026-03-17 00:27:34.382722 | orchestrator | 2026-03-17 00:27:34.382866 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-17 00:27:34.383705 | orchestrator | 2026-03-17 00:27:34.383735 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 00:27:34.383747 | orchestrator | Tuesday 17 March 2026 00:26:52 +0000 (0:00:00.105) 0:00:00.105 ********* 2026-03-17 00:27:34.383759 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:34.383771 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.383783 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.383794 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.383805 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.383815 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.383826 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.383837 | orchestrator | 2026-03-17 00:27:34.383848 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-17 00:27:34.383859 | orchestrator | Tuesday 17 March 2026 00:26:54 +0000 (0:00:01.378) 0:00:01.484 
********* 2026-03-17 00:27:34.383870 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:34.383881 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.383891 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.383902 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.383914 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.383925 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.383953 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.383964 | orchestrator | 2026-03-17 00:27:34.384000 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-17 00:27:34.384012 | orchestrator | 2026-03-17 00:27:34.384023 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:27:34.384033 | orchestrator | Tuesday 17 March 2026 00:26:55 +0000 (0:00:01.180) 0:00:02.664 ********* 2026-03-17 00:27:34.384063 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.384075 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.384085 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.384096 | orchestrator | 2026-03-17 00:27:34.384107 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:27:34.384119 | orchestrator | Tuesday 17 March 2026 00:26:55 +0000 (0:00:00.104) 0:00:02.769 ********* 2026-03-17 00:27:34.384130 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.384141 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.384151 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.384162 | orchestrator | 2026-03-17 00:27:34.384173 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:27:34.384183 | orchestrator | Tuesday 17 March 2026 00:26:55 +0000 (0:00:00.189) 0:00:02.958 ********* 2026-03-17 00:27:34.384194 | orchestrator | ok: [testbed-node-3] 2026-03-17 
00:27:34.384205 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.384215 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.384226 | orchestrator | 2026-03-17 00:27:34.384237 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:27:34.384247 | orchestrator | Tuesday 17 March 2026 00:26:55 +0000 (0:00:00.200) 0:00:03.158 ********* 2026-03-17 00:27:34.384259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:27:34.384271 | orchestrator | 2026-03-17 00:27:34.384282 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-17 00:27:34.384293 | orchestrator | Tuesday 17 March 2026 00:26:55 +0000 (0:00:00.120) 0:00:03.278 ********* 2026-03-17 00:27:34.384304 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.384315 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.384325 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.384336 | orchestrator | 2026-03-17 00:27:34.384347 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:27:34.384358 | orchestrator | Tuesday 17 March 2026 00:26:56 +0000 (0:00:00.420) 0:00:03.699 ********* 2026-03-17 00:27:34.384368 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.384382 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:34.384402 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.384421 | orchestrator | 2026-03-17 00:27:34.384440 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:27:34.384458 | orchestrator | Tuesday 17 March 2026 00:26:56 +0000 (0:00:00.108) 0:00:03.808 ********* 2026-03-17 00:27:34.384476 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.384497 | orchestrator | 
changed: [testbed-node-4] 2026-03-17 00:27:34.384514 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.384532 | orchestrator | 2026-03-17 00:27:34.384551 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:27:34.384571 | orchestrator | Tuesday 17 March 2026 00:26:57 +0000 (0:00:01.031) 0:00:04.840 ********* 2026-03-17 00:27:34.384590 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.384609 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.384627 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.384647 | orchestrator | 2026-03-17 00:27:34.384668 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 00:27:34.384688 | orchestrator | Tuesday 17 March 2026 00:26:57 +0000 (0:00:00.454) 0:00:05.294 ********* 2026-03-17 00:27:34.384708 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.384727 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.384747 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.384766 | orchestrator | 2026-03-17 00:27:34.384801 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:27:34.384821 | orchestrator | Tuesday 17 March 2026 00:26:59 +0000 (0:00:01.104) 0:00:06.399 ********* 2026-03-17 00:27:34.384841 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.384859 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.384879 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.384896 | orchestrator | 2026-03-17 00:27:34.384907 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-17 00:27:34.384917 | orchestrator | Tuesday 17 March 2026 00:27:15 +0000 (0:00:16.430) 0:00:22.830 ********* 2026-03-17 00:27:34.384928 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.384939 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:27:34.384949 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.384960 | orchestrator | 2026-03-17 00:27:34.384971 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-17 00:27:34.385003 | orchestrator | Tuesday 17 March 2026 00:27:15 +0000 (0:00:00.098) 0:00:22.928 ********* 2026-03-17 00:27:34.385015 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.385025 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.385036 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.385116 | orchestrator | 2026-03-17 00:27:34.385129 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 00:27:34.385140 | orchestrator | Tuesday 17 March 2026 00:27:23 +0000 (0:00:08.249) 0:00:31.178 ********* 2026-03-17 00:27:34.385151 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.385161 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.385172 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.385183 | orchestrator | 2026-03-17 00:27:34.385194 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-17 00:27:34.385204 | orchestrator | Tuesday 17 March 2026 00:27:24 +0000 (0:00:00.489) 0:00:31.667 ********* 2026-03-17 00:27:34.385215 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-17 00:27:34.385227 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-17 00:27:34.385237 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-17 00:27:34.385248 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-17 00:27:34.385259 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-17 00:27:34.385269 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-17 00:27:34.385283 | orchestrator | 
changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-17 00:27:34.385302 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-17 00:27:34.385320 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-17 00:27:34.385337 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-17 00:27:34.385356 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-17 00:27:34.385374 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-17 00:27:34.385393 | orchestrator | 2026-03-17 00:27:34.385412 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-17 00:27:34.385430 | orchestrator | Tuesday 17 March 2026 00:27:28 +0000 (0:00:03.721) 0:00:35.389 ********* 2026-03-17 00:27:34.385448 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.385467 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.385485 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.385501 | orchestrator | 2026-03-17 00:27:34.385512 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:27:34.385522 | orchestrator | 2026-03-17 00:27:34.385533 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-17 00:27:34.385587 | orchestrator | Tuesday 17 March 2026 00:27:29 +0000 (0:00:01.498) 0:00:36.888 ********* 2026-03-17 00:27:34.385599 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:27:34.385620 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:27:34.385639 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:27:34.385656 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:34.385674 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.385692 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.385711 | orchestrator | ok: [testbed-node-3] 
2026-03-17 00:27:34.385729 | orchestrator | 2026-03-17 00:27:34.385748 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:27:34.385767 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:27:34.385786 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:27:34.385805 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:27:34.385823 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:27:34.385842 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:27:34.385860 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:27:34.385879 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:27:34.385896 | orchestrator | 2026-03-17 00:27:34.385908 | orchestrator | 2026-03-17 00:27:34.385919 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:27:34.385930 | orchestrator | Tuesday 17 March 2026 00:27:34 +0000 (0:00:04.780) 0:00:41.668 ********* 2026-03-17 00:27:34.385940 | orchestrator | =============================================================================== 2026-03-17 00:27:34.385951 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.43s 2026-03-17 00:27:34.385962 | orchestrator | Install required packages (Debian) -------------------------------------- 8.25s 2026-03-17 00:27:34.385972 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s 2026-03-17 00:27:34.385987 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.72s
2026-03-17 00:27:34.386004 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.50s
2026-03-17 00:27:34.386118 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2026-03-17 00:27:34.386163 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-03-17 00:27:34.545953 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-03-17 00:27:34.546132 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-17 00:27:34.546153 | orchestrator | Create custom facts directory ------------------------------------------- 0.49s
2026-03-17 00:27:34.546161 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-03-17 00:27:34.546168 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-03-17 00:27:34.546175 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-03-17 00:27:34.546182 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-17 00:27:34.546189 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-03-17 00:27:34.546197 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-03-17 00:27:34.546203 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-17 00:27:34.546234 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-17 00:27:34.720446 | orchestrator | + osism apply bootstrap
2026-03-17 00:27:46.005526 | orchestrator | 2026-03-17 00:27:46 | INFO  | Prepare task for execution of bootstrap.
2026-03-17 00:27:46.078692 | orchestrator | 2026-03-17 00:27:46 | INFO  | Task 7fb56e0b-5e77-442a-9b76-b2045ffd3a98 (bootstrap) was prepared for execution.
2026-03-17 00:27:46.078784 | orchestrator | 2026-03-17 00:27:46 | INFO  | It takes a moment until task 7fb56e0b-5e77-442a-9b76-b2045ffd3a98 (bootstrap) has been started and output is visible here.
2026-03-17 00:28:01.169813 | orchestrator |
2026-03-17 00:28:01.169956 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-17 00:28:01.169986 | orchestrator |
2026-03-17 00:28:01.170005 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-17 00:28:01.170166 | orchestrator | Tuesday 17 March 2026 00:27:49 +0000 (0:00:00.169) 0:00:00.169 *********
2026-03-17 00:28:01.170233 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:01.170255 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:01.170267 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:01.170278 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:01.170289 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:01.170300 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:01.170310 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:01.170321 | orchestrator |
2026-03-17 00:28:01.170332 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:28:01.170344 | orchestrator |
2026-03-17 00:28:01.170357 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:28:01.170370 | orchestrator | Tuesday 17 March 2026 00:27:49 +0000 (0:00:00.255) 0:00:00.424 *********
2026-03-17 00:28:01.170382 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:01.170396 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:01.170408 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:01.170421 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:01.170433 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:01.170446 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:01.170459 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:01.170472 | orchestrator |
2026-03-17 00:28:01.170485 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-17 00:28:01.170497 | orchestrator |
2026-03-17 00:28:01.170509 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:28:01.170522 | orchestrator | Tuesday 17 March 2026 00:27:54 +0000 (0:00:04.747) 0:00:05.171 *********
2026-03-17 00:28:01.170535 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:28:01.170549 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:28:01.170562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-17 00:28:01.170574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:28:01.170587 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:28:01.170599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:28:01.170611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-17 00:28:01.170624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:28:01.170636 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:28:01.170650 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 00:28:01.170670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:28:01.170687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 00:28:01.170706 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-17 00:28:01.170724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-17 00:28:01.170740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:28:01.170757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-17 00:28:01.170808 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 00:28:01.170827 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 00:28:01.170844 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:28:01.170862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-17 00:28:01.170878 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:01.170897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 00:28:01.170916 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-17 00:28:01.170933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:28:01.170948 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:01.170964 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 00:28:01.170980 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-17 00:28:01.170997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:28:01.171013 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-17 00:28:01.171030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 00:28:01.171079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-17 00:28:01.171097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-17 00:28:01.171114 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-17 00:28:01.171131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:28:01.171150 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-17 00:28:01.171168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-17 00:28:01.171186 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:01.171201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:28:01.171212 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 00:28:01.171223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 00:28:01.171233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-17 00:28:01.171244 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:01.171255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:28:01.171266 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:01.171277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 00:28:01.171288 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 00:28:01.171299 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 00:28:01.171334 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 00:28:01.171345 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-17 00:28:01.171356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-17 00:28:01.171367 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-17 00:28:01.171378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-17 00:28:01.171388 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:01.171399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-17 00:28:01.171410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-17 00:28:01.171421 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:01.171432 | orchestrator |
2026-03-17 00:28:01.171443 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-17 00:28:01.171453 | orchestrator |
2026-03-17 00:28:01.171464 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-17 00:28:01.171475 | orchestrator | Tuesday 17 March 2026 00:27:54 +0000 (0:00:00.395) 0:00:05.566 *********
2026-03-17 00:28:01.171486 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:01.171497 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:01.171522 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:01.171533 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:01.171544 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:01.171554 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:01.171565 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:01.171576 | orchestrator |
2026-03-17 00:28:01.171587 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-17 00:28:01.171597 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:01.156) 0:00:06.723 *********
2026-03-17 00:28:01.171608 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:01.171619 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:01.171630 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:01.171640 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:01.171651 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:01.171661 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:01.171672 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:01.171682 | orchestrator |
2026-03-17 00:28:01.171693 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-17 00:28:01.171704 | orchestrator | Tuesday 17 March 2026 00:27:56 +0000 (0:00:01.270) 0:00:07.994 *********
2026-03-17 00:28:01.171716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:01.171729 | orchestrator |
2026-03-17 00:28:01.171740 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-17 00:28:01.171751 | orchestrator | Tuesday 17 March 2026 00:27:57 +0000 (0:00:00.289) 0:00:08.283 *********
2026-03-17 00:28:01.171762 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:01.171773 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:01.171783 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:01.171794 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:01.171805 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:01.171815 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:01.171826 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:01.171836 | orchestrator |
2026-03-17 00:28:01.171847 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-17 00:28:01.171858 | orchestrator | Tuesday 17 March 2026 00:27:58 +0000 (0:00:01.463) 0:00:09.747 *********
2026-03-17 00:28:01.171869 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:01.171881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:01.171947 | orchestrator |
2026-03-17 00:28:01.171959 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-17 00:28:01.171990 | orchestrator | Tuesday 17 March 2026 00:27:58 +0000 (0:00:00.260) 0:00:10.008 *********
2026-03-17 00:28:01.172001 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:01.172013 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:01.172024 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:01.172035 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:01.172066 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:01.172077 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:01.172087 | orchestrator |
2026-03-17 00:28:01.172103 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-17 00:28:01.172124 | orchestrator | Tuesday 17 March 2026 00:27:59 +0000 (0:00:01.032) 0:00:11.040 *********
2026-03-17 00:28:01.172141 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:01.172158 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:01.172178 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:01.172198 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:01.172218 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:01.172239 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:01.172262 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:01.172273 | orchestrator |
2026-03-17 00:28:01.172284 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-17 00:28:01.172300 | orchestrator | Tuesday 17 March 2026 00:28:00 +0000 (0:00:00.670) 0:00:11.710 *********
2026-03-17 00:28:01.172311 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:01.172322 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:01.172333 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:01.172343 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:01.172354 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:01.172364 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:01.172375 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:01.172386 | orchestrator |
2026-03-17 00:28:01.172397 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-17 00:28:01.172408 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.404) 0:00:12.115 *********
2026-03-17 00:28:01.172419 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:01.172430 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:01.172451 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:12.964434 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:12.964558 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:12.964575 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:12.964586 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:12.964598 | orchestrator |
2026-03-17 00:28:12.964610 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-17 00:28:12.964623 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.214) 0:00:12.330 *********
2026-03-17 00:28:12.964636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:12.964665 | orchestrator |
2026-03-17 00:28:12.964676 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-17 00:28:12.964689 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.310) 0:00:12.640 *********
2026-03-17 00:28:12.964700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:12.964711 | orchestrator |
2026-03-17 00:28:12.964722 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-17 00:28:12.964732 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.326) 0:00:12.967 *********
2026-03-17 00:28:12.964743 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.964755 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.964766 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.964776 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.964787 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.964797 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.964808 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.964818 | orchestrator |
2026-03-17 00:28:12.964829 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-17 00:28:12.964840 | orchestrator | Tuesday 17 March 2026 00:28:03 +0000 (0:00:01.359) 0:00:14.326 *********
2026-03-17 00:28:12.964852 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:12.964863 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:12.964874 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:12.964884 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:12.964895 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:12.964906 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:12.964916 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:12.964928 | orchestrator |
2026-03-17 00:28:12.964948 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-17 00:28:12.965000 | orchestrator | Tuesday 17 March 2026 00:28:03 +0000 (0:00:00.198) 0:00:14.525 *********
2026-03-17 00:28:12.965021 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.965064 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.965082 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.965101 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.965120 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.965137 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.965154 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.965173 | orchestrator |
2026-03-17 00:28:12.965192 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-17 00:28:12.965212 | orchestrator | Tuesday 17 March 2026 00:28:04 +0000 (0:00:00.556) 0:00:15.082 *********
2026-03-17 00:28:12.965231 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:12.965250 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:12.965263 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:12.965273 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:12.965284 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:12.965295 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:12.965305 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:12.965316 | orchestrator |
2026-03-17 00:28:12.965327 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-17 00:28:12.965339 | orchestrator | Tuesday 17 March 2026 00:28:04 +0000 (0:00:00.230) 0:00:15.312 *********
2026-03-17 00:28:12.965350 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.965361 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:12.965371 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:12.965382 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:12.965393 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:12.965403 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:12.965414 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:12.965425 | orchestrator |
2026-03-17 00:28:12.965436 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-17 00:28:12.965459 | orchestrator | Tuesday 17 March 2026 00:28:04 +0000 (0:00:00.534) 0:00:15.847 *********
2026-03-17 00:28:12.965470 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.965481 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:12.965491 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:12.965502 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:12.965513 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:12.965523 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:12.965534 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:12.965544 | orchestrator |
2026-03-17 00:28:12.965565 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-17 00:28:12.965576 | orchestrator | Tuesday 17 March 2026 00:28:05 +0000 (0:00:01.121) 0:00:16.968 *********
2026-03-17 00:28:12.965587 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.965598 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.965609 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.965619 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.965630 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.965641 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.965651 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.965662 | orchestrator |
2026-03-17 00:28:12.965673 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-17 00:28:12.965684 | orchestrator | Tuesday 17 March 2026 00:28:06 +0000 (0:00:01.015) 0:00:17.984 *********
2026-03-17 00:28:12.965715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:12.965727 | orchestrator |
2026-03-17 00:28:12.965738 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-17 00:28:12.965749 | orchestrator | Tuesday 17 March 2026 00:28:07 +0000 (0:00:00.310) 0:00:18.295 *********
2026-03-17 00:28:12.965771 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:12.965782 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:12.965792 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:12.965803 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:12.965814 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:12.965825 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:12.965835 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:12.965846 | orchestrator |
2026-03-17 00:28:12.965856 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-17 00:28:12.965867 | orchestrator | Tuesday 17 March 2026 00:28:08 +0000 (0:00:01.239) 0:00:19.534 *********
2026-03-17 00:28:12.965878 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.965888 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.965899 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.965910 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.965920 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.965931 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.965941 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.965952 | orchestrator |
2026-03-17 00:28:12.965964 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-17 00:28:12.965984 | orchestrator | Tuesday 17 March 2026 00:28:08 +0000 (0:00:00.211) 0:00:19.746 *********
2026-03-17 00:28:12.966004 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966126 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.966148 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.966169 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.966188 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966208 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966221 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.966232 | orchestrator |
2026-03-17 00:28:12.966242 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-17 00:28:12.966253 | orchestrator | Tuesday 17 March 2026 00:28:08 +0000 (0:00:00.215) 0:00:19.962 *********
2026-03-17 00:28:12.966264 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966275 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.966286 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.966296 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.966307 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966317 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966328 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.966339 | orchestrator |
2026-03-17 00:28:12.966350 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-17 00:28:12.966361 | orchestrator | Tuesday 17 March 2026 00:28:09 +0000 (0:00:00.215) 0:00:20.177 *********
2026-03-17 00:28:12.966373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:12.966386 | orchestrator |
2026-03-17 00:28:12.966397 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-17 00:28:12.966407 | orchestrator | Tuesday 17 March 2026 00:28:09 +0000 (0:00:00.282) 0:00:20.460 *********
2026-03-17 00:28:12.966418 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966429 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.966440 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.966451 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.966461 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966472 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966483 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.966494 | orchestrator |
2026-03-17 00:28:12.966505 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-17 00:28:12.966516 | orchestrator | Tuesday 17 March 2026 00:28:09 +0000 (0:00:00.530) 0:00:20.991 *********
2026-03-17 00:28:12.966527 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:12.966538 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:12.966559 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:12.966571 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:12.966581 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:12.966592 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:12.966603 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:12.966614 | orchestrator |
2026-03-17 00:28:12.966625 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-17 00:28:12.966637 | orchestrator | Tuesday 17 March 2026 00:28:10 +0000 (0:00:00.210) 0:00:21.201 *********
2026-03-17 00:28:12.966648 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966659 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:12.966670 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:12.966680 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:12.966691 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966702 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.966713 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966724 | orchestrator |
2026-03-17 00:28:12.966735 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-17 00:28:12.966747 | orchestrator | Tuesday 17 March 2026 00:28:11 +0000 (0:00:01.175) 0:00:22.377 *********
2026-03-17 00:28:12.966757 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966768 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:12.966779 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966789 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:12.966800 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:12.966811 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:12.966822 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966834 | orchestrator |
2026-03-17 00:28:12.966853 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-17 00:28:12.966870 | orchestrator | Tuesday 17 March 2026 00:28:11 +0000 (0:00:00.637) 0:00:23.014 *********
2026-03-17 00:28:12.966887 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:12.966905 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:12.966923 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:12.966941 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:12.966974 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.416892 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.417080 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.417110 | orchestrator |
2026-03-17 00:28:53.417133 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-17 00:28:53.417154 | orchestrator | Tuesday 17 March 2026 00:28:13 +0000 (0:00:01.139) 0:00:24.153 *********
2026-03-17 00:28:53.417174 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.417194 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.417215 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.417234 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:53.417253 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:53.417272 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.417293 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.417313 | orchestrator |
2026-03-17 00:28:53.417334 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-17 00:28:53.417355 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:16.983) 0:00:41.137 *********
2026-03-17 00:28:53.417378 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.417400 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.417423 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.417446 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.417464 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.417482 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.417498 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.417515 | orchestrator |
2026-03-17 00:28:53.417531 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-17 00:28:53.417548 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:00.215) 0:00:41.352 *********
2026-03-17 00:28:53.417563 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.417613 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.417634 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.417670 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.417700 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.417716 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.417731 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.417746 | orchestrator |
2026-03-17 00:28:53.417762 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-17 00:28:53.417777 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:00.199) 0:00:41.551 *********
2026-03-17 00:28:53.417793 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.417806 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.417821 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.417836 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.417850 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.417865 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.417878 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.417892 | orchestrator |
2026-03-17 00:28:53.417907 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-17 00:28:53.417921 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:00.221) 0:00:41.773 *********
2026-03-17 00:28:53.417937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:53.417955 | orchestrator |
2026-03-17 00:28:53.418002 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-17 00:28:53.418125 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:00.267) 0:00:42.040 *********
2026-03-17 00:28:53.418148 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.418165 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.418181 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.418198 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.418216 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.418234 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.418251 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.418270 | orchestrator |
2026-03-17 00:28:53.418288 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-17 00:28:53.418306 | orchestrator | Tuesday 17 March 2026 00:28:32 +0000 (0:00:01.979) 0:00:44.020 *********
2026-03-17 00:28:53.418325 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:53.418345 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.418363 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:53.418382 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.418401 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:53.418420 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:53.418439 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:53.418458 | orchestrator |
2026-03-17 00:28:53.418477 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-17 00:28:53.418496 | orchestrator | Tuesday 17 March 2026 00:28:34 +0000 (0:00:01.078) 0:00:45.099 *********
2026-03-17 00:28:53.418516 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.418535 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.418553 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.418572 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.418591 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.418610 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.418628 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.418646 | orchestrator |
2026-03-17 00:28:53.418666 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-17 00:28:53.418684 | orchestrator | Tuesday 17 March 2026 00:28:35 +0000 (0:00:00.969) 0:00:46.068 *********
2026-03-17 00:28:53.418714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:53.418752 | orchestrator |
2026-03-17 00:28:53.418770 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-17 00:28:53.418790 | orchestrator | Tuesday 17 March 2026 00:28:35 +0000 (0:00:00.269) 0:00:46.338 *********
2026-03-17 00:28:53.418809 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:53.418828 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:53.418847 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.418866 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:53.418884 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.418903 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:53.418921 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:53.418940 | orchestrator |
2026-03-17 00:28:53.418987 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-17 00:28:53.419006 | orchestrator | Tuesday 17 March 2026 00:28:36 +0000 (0:00:01.052) 0:00:47.390 *********
2026-03-17 00:28:53.419096 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:28:53.419115 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:28:53.419133 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:28:53.419151 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:28:53.419168 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:28:53.419185 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:28:53.419203 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:28:53.419220 | orchestrator |
2026-03-17 00:28:53.419238 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-17 00:28:53.419255 | orchestrator | Tuesday 17 March 2026 00:28:36 +0000 (0:00:00.204) 0:00:47.595 *********
2026-03-17 00:28:53.419273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:28:53.419289 | orchestrator |
2026-03-17 00:28:53.419305 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-17 00:28:53.419320 | orchestrator | Tuesday 17 March 2026 00:28:36 +0000 (0:00:00.269) 0:00:47.864 *********
2026-03-17 00:28:53.419337 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:53.419352 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:53.419366 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:53.419380 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:53.419394 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:53.419408 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:53.419422 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:53.419436 | orchestrator |
2026-03-17 00:28:53.419450 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-17 00:28:53.419464 | orchestrator | Tuesday 17 March 2026 00:28:38 +0000 (0:00:01.814) 0:00:49.678 *********
2026-03-17 00:28:53.419478 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:53.419492 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:53.419506 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.419516 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.419528 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:53.419539 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:53.419551 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:53.419562 | orchestrator |
2026-03-17 00:28:53.419573 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-17 00:28:53.419585 | orchestrator | Tuesday 17 March 2026 00:28:39 +0000 (0:00:01.241) 0:00:50.920 *********
2026-03-17 00:28:53.419597 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:28:53.419608 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:28:53.419619 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:28:53.419631 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:28:53.419642 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:28:53.419653 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:28:53.419676 | orchestrator | changed: [testbed-manager]
2026-03-17 00:28:53.419689 |
orchestrator | 2026-03-17 00:28:53.419701 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-17 00:28:53.419714 | orchestrator | Tuesday 17 March 2026 00:28:50 +0000 (0:00:10.730) 0:01:01.650 ********* 2026-03-17 00:28:53.419727 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:28:53.419739 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:53.419750 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:53.419761 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:53.419772 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:28:53.419784 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:53.419796 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:28:53.419809 | orchestrator | 2026-03-17 00:28:53.419823 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-17 00:28:53.419836 | orchestrator | Tuesday 17 March 2026 00:28:51 +0000 (0:00:01.075) 0:01:02.726 ********* 2026-03-17 00:28:53.419849 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:53.419862 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:28:53.419875 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:28:53.419889 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:28:53.419902 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:53.419915 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:53.419926 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:53.419939 | orchestrator | 2026-03-17 00:28:53.419952 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-17 00:28:53.419965 | orchestrator | Tuesday 17 March 2026 00:28:52 +0000 (0:00:01.060) 0:01:03.787 ********* 2026-03-17 00:28:53.419979 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:53.419993 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:28:53.420007 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:28:53.420045 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:28:53.420060 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:53.420074 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:53.420086 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:53.420099 | orchestrator | 2026-03-17 00:28:53.420113 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-17 00:28:53.420128 | orchestrator | Tuesday 17 March 2026 00:28:52 +0000 (0:00:00.159) 0:01:03.946 ********* 2026-03-17 00:28:53.420142 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:53.420155 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:28:53.420170 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:28:53.420183 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:28:53.420205 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:53.420220 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:53.420234 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:53.420248 | orchestrator | 2026-03-17 00:28:53.420263 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-17 00:28:53.420277 | orchestrator | Tuesday 17 March 2026 00:28:53 +0000 (0:00:00.206) 0:01:04.153 ********* 2026-03-17 00:28:53.420293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:28:53.420309 | orchestrator | 2026-03-17 00:28:53.420337 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-17 00:32:20.106523 | orchestrator | Tuesday 17 March 2026 00:28:53 +0000 (0:00:00.320) 0:01:04.474 ********* 2026-03-17 00:32:20.106640 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.106657 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.106670 | orchestrator | 
ok: [testbed-node-1] 2026-03-17 00:32:20.106682 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.106692 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.106703 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.106714 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.106725 | orchestrator | 2026-03-17 00:32:20.106737 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-17 00:32:20.106773 | orchestrator | Tuesday 17 March 2026 00:28:55 +0000 (0:00:01.950) 0:01:06.424 ********* 2026-03-17 00:32:20.106785 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:20.106797 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:20.106807 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:20.106818 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:20.106829 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:20.106839 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:20.106850 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:20.106861 | orchestrator | 2026-03-17 00:32:20.106872 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-17 00:32:20.106884 | orchestrator | Tuesday 17 March 2026 00:28:55 +0000 (0:00:00.561) 0:01:06.985 ********* 2026-03-17 00:32:20.106894 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.106905 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.106916 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.106968 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.106979 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.106990 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107000 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.107011 | orchestrator | 2026-03-17 00:32:20.107022 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-17 
00:32:20.107033 | orchestrator | Tuesday 17 March 2026 00:28:56 +0000 (0:00:00.204) 0:01:07.190 ********* 2026-03-17 00:32:20.107044 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.107055 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.107068 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.107080 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.107092 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107116 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.107128 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.107140 | orchestrator | 2026-03-17 00:32:20.107153 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-17 00:32:20.107165 | orchestrator | Tuesday 17 March 2026 00:28:57 +0000 (0:00:01.252) 0:01:08.443 ********* 2026-03-17 00:32:20.107178 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:20.107190 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:20.107202 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:20.107214 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:20.107226 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:20.107238 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:20.107250 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:20.107263 | orchestrator | 2026-03-17 00:32:20.107275 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-17 00:32:20.107287 | orchestrator | Tuesday 17 March 2026 00:28:59 +0000 (0:00:02.128) 0:01:10.572 ********* 2026-03-17 00:32:20.107299 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.107311 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.107324 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.107336 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.107348 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107361 | orchestrator | ok: 
[testbed-node-5] 2026-03-17 00:32:20.107373 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.107385 | orchestrator | 2026-03-17 00:32:20.107398 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-17 00:32:20.107410 | orchestrator | Tuesday 17 March 2026 00:29:02 +0000 (0:00:02.684) 0:01:13.257 ********* 2026-03-17 00:32:20.107420 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.107431 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.107442 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.107452 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.107463 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.107473 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.107484 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107494 | orchestrator | 2026-03-17 00:32:20.107505 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-17 00:32:20.107525 | orchestrator | Tuesday 17 March 2026 00:30:50 +0000 (0:01:48.505) 0:03:01.763 ********* 2026-03-17 00:32:20.107536 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:20.107547 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:20.107558 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:20.107568 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:20.107579 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:20.107590 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:20.107615 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:20.107625 | orchestrator | 2026-03-17 00:32:20.107646 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-17 00:32:20.107657 | orchestrator | Tuesday 17 March 2026 00:32:06 +0000 (0:01:15.825) 0:04:17.588 ********* 2026-03-17 00:32:20.107668 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:20.107678 | orchestrator | 
ok: [testbed-node-3] 2026-03-17 00:32:20.107689 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.107700 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.107710 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107721 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.107732 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.107743 | orchestrator | 2026-03-17 00:32:20.107754 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-17 00:32:20.107765 | orchestrator | Tuesday 17 March 2026 00:32:08 +0000 (0:00:01.986) 0:04:19.575 ********* 2026-03-17 00:32:20.107776 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:20.107787 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:20.107797 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:20.107808 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:20.107818 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:20.107829 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:20.107839 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:20.107850 | orchestrator | 2026-03-17 00:32:20.107861 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-17 00:32:20.107872 | orchestrator | Tuesday 17 March 2026 00:32:19 +0000 (0:00:10.492) 0:04:30.068 ********* 2026-03-17 00:32:20.107916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-17 00:32:20.107961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-17 00:32:20.107977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-17 00:32:20.107990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:32:20.108009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:32:20.108020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-17 00:32:20.108037 | orchestrator | 2026-03-17 00:32:20.108049 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-17 00:32:20.108060 | orchestrator | Tuesday 17 March 2026 00:32:19 +0000 (0:00:00.402) 0:04:30.470 ********* 2026-03-17 00:32:20.108071 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:20.108081 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:20.108092 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:20.108103 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:20.108114 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:20.108125 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:20.108147 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:20.108158 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:20.108169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:20.108180 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:20.108191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:20.108202 | orchestrator | 2026-03-17 00:32:20.108212 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-17 00:32:20.108228 | orchestrator | Tuesday 17 March 2026 00:32:20 +0000 (0:00:00.632) 0:04:31.103 ********* 2026-03-17 00:32:20.108239 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:20.108251 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:20.108262 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:20.108273 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:20.108284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:20.108301 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:28.945004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:28.945106 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:28.945118 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:28.945128 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:28.945143 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:28.945159 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:28.945174 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:28.945190 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:28.945231 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:28.945247 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:28.945263 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 
00:32:28.945278 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:28.945293 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:28.945306 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:28.945321 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:28.945336 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:28.945350 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:28.945365 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:28.945378 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:28.945393 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:28.945408 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:28.945424 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:28.945438 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:28.945454 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:28.945468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:28.945484 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:28.945496 | orchestrator | skipping: [testbed-node-4] 2026-03-17 
00:32:28.945506 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:28.945516 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:28.945526 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:28.945537 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:28.945546 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:28.945556 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:28.945566 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:28.945575 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:28.945584 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:28.945607 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:28.945618 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:28.945628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:28.945638 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:28.945647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:28.945666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:28.945677 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:28.945703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:28.945713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:28.945721 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:28.945730 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:28.945739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945747 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945756 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:32:28.945773 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:32:28.945790 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:28.945807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:32:28.945815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:28.945824 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2026-03-17 00:32:28.945832 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:28.945841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:28.945849 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:32:28.945858 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:28.945867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:32:28.945880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:28.945895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:28.945909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:28.945948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:28.945962 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:28.945975 | orchestrator | 2026-03-17 00:32:28.945992 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-17 00:32:28.946006 | orchestrator | Tuesday 17 March 2026 00:32:26 +0000 (0:00:06.722) 0:04:37.825 ********* 2026-03-17 00:32:28.946079 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:28.946091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:28.946100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:28.946109 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-17 00:32:28.946126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-17 00:32:28.946134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-17 00:32:28.946143 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-17 00:32:28.946152 | orchestrator |
2026-03-17 00:32:28.946160 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-17 00:32:28.946169 | orchestrator | Tuesday 17 March 2026 00:32:28 +0000 (0:00:01.549)       0:04:39.374 *********
2026-03-17 00:32:28.946177 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946186 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:28.946201 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946210 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:32:28.946219 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946228 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:32:28.946236 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946245 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:32:28.946253 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946262 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:28.946279 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.407896 | orchestrator |
2026-03-17 00:32:42.408062 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-17 00:32:42.408092 | orchestrator | Tuesday 17 March 2026 00:32:28 +0000 (0:00:00.657)       0:04:40.032 *********
2026-03-17 00:32:42.408112 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408132 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:42.408145 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408156 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:42.408167 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408178 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:42.408189 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408200 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:42.408212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-17 00:32:42.408244 | orchestrator |
2026-03-17 00:32:42.408256 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-17 00:32:42.408267 | orchestrator | Tuesday 17 March 2026 00:32:30 +0000 (0:00:01.518)       0:04:41.550 *********
2026-03-17 00:32:42.408278 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408289 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:42.408299 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408310 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:32:42.408321 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408359 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:32:42.408371 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408382 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:32:42.408393 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408403 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408414 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-17 00:32:42.408427 | orchestrator |
2026-03-17 00:32:42.408439 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-17 00:32:42.408452 | orchestrator | Tuesday 17 March 2026 00:32:31 +0000 (0:00:00.814)       0:04:42.364 *********
2026-03-17 00:32:42.408465 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:42.408477 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:32:42.408490 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:32:42.408503 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:32:42.408515 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:42.408527 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:42.408540 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:42.408552 | orchestrator |
2026-03-17 00:32:42.408565 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-17 00:32:42.408577 | orchestrator | Tuesday 17 March 2026 00:32:31 +0000 (0:00:00.304)       0:04:42.668 *********
2026-03-17 00:32:42.408590 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:42.408602 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:42.408615 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:42.408627 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:42.408639 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:42.408651 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:42.408663 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:42.408675 | orchestrator |
2026-03-17 00:32:42.408687 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-17 00:32:42.408698 | orchestrator | Tuesday 17 March 2026 00:32:36 +0000 (0:00:05.146)       0:04:47.815 *********
2026-03-17 00:32:42.408709 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-17 00:32:42.408721 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-17 00:32:42.408731 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:42.408744 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-17 00:32:42.408762 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:32:42.408785 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-17 00:32:42.408812 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:32:42.408830 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-17 00:32:42.408848 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:32:42.408865 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:42.408881 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-17 00:32:42.408899 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:42.409038 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-17 00:32:42.409058 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:42.409074 | orchestrator |
2026-03-17 00:32:42.409092 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-17 00:32:42.409109 | orchestrator | Tuesday 17 March 2026 00:32:37 +0000 (0:00:00.277)       0:04:48.092 *********
2026-03-17 00:32:42.409125 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-17 00:32:42.409144 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-17 00:32:42.409163 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-17 00:32:42.409205 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-17 00:32:42.409225 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-17 00:32:42.409244 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-17 00:32:42.409281 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-17 00:32:42.409301 | orchestrator |
2026-03-17 00:32:42.409320 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-17 00:32:42.409339 | orchestrator | Tuesday 17 March 2026 00:32:38 +0000 (0:00:01.161)       0:04:49.254 *********
2026-03-17 00:32:42.409353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:32:42.409367 | orchestrator |
2026-03-17 00:32:42.409378 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-17 00:32:42.409389 | orchestrator | Tuesday 17 March 2026 00:32:38 +0000 (0:00:00.385)       0:04:49.639 *********
2026-03-17 00:32:42.409400 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:42.409411 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:42.409421 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:42.409432 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:42.409443 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:42.409454 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:42.409464 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:42.409475 | orchestrator |
2026-03-17 00:32:42.409486 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-17 00:32:42.409496 | orchestrator | Tuesday 17 March 2026 00:32:39 +0000 (0:00:01.397)       0:04:51.037 *********
2026-03-17 00:32:42.409507 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:42.409518 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:42.409529 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:42.409539 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:42.409549 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:42.409560 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:42.409591 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:42.409602 | orchestrator |
2026-03-17 00:32:42.409613 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-17 00:32:42.409624 | orchestrator | Tuesday 17 March 2026 00:32:40 +0000 (0:00:00.607)       0:04:51.644 *********
2026-03-17 00:32:42.409635 | orchestrator | changed: [testbed-manager]
2026-03-17 00:32:42.409646 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:42.409657 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:42.409667 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:42.409680 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:42.409699 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:42.409717 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:42.409734 | orchestrator |
2026-03-17 00:32:42.409751 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-17 00:32:42.409769 | orchestrator | Tuesday 17 March 2026 00:32:41 +0000 (0:00:00.671)       0:04:52.316 *********
2026-03-17 00:32:42.409787 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:42.409803 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:42.409821 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:42.409839 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:42.409858 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:42.409877 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:42.409895 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:42.409946 | orchestrator |
2026-03-17 00:32:42.409964 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-17 00:32:42.409981 | orchestrator | Tuesday 17 March 2026 00:32:41 +0000 (0:00:00.607)       0:04:52.923 *********
2026-03-17 00:32:42.410002 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705937.275596, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:42.410124 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705935.3151107, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:42.410144 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705952.3980846, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:42.410184 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705969.643912, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957414 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705947.569836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957522 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705934.5155773, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957539 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705961.8944342, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957551 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957588 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957616 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957629 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957658 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957672 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957692 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 00:32:47.957712 | orchestrator |
2026-03-17 00:32:47.957732 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-17 00:32:47.957753 | orchestrator | Tuesday 17 March 2026 00:32:42 +0000 (0:00:00.999)       0:04:53.923 *********
2026-03-17 00:32:47.957772 | orchestrator | changed: [testbed-manager]
2026-03-17 00:32:47.957793 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:47.957811 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:47.957845 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:47.957865 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:47.957878 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:47.957889 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:47.957932 | orchestrator |
2026-03-17 00:32:47.957951 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-17 00:32:47.957971 | orchestrator | Tuesday 17 March 2026 00:32:43 +0000 (0:00:01.135)       0:04:55.059 *********
2026-03-17 00:32:47.957988 | orchestrator | changed: [testbed-manager]
2026-03-17 00:32:47.958005 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:47.958109 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:47.958131 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:47.958151 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:47.958170 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:47.958194 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:47.958231 | orchestrator |
2026-03-17 00:32:47.958255 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-17 00:32:47.958273 | orchestrator | Tuesday 17 March 2026 00:32:45 +0000 (0:00:01.227)       0:04:56.287 *********
2026-03-17 00:32:47.958292 | orchestrator | changed: [testbed-manager]
2026-03-17 00:32:47.958312 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:47.958331 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:47.958348 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:47.958366 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:47.958377 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:47.958388 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:47.958399 | orchestrator |
2026-03-17 00:32:47.958409 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-17 00:32:47.958430 | orchestrator | Tuesday 17 March 2026 00:32:46 +0000 (0:00:01.351)       0:04:57.638 *********
2026-03-17 00:32:47.958441 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:32:47.958452 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:32:47.958463 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:32:47.958474 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:32:47.958484 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:47.958495 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:47.958505 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:47.958516 | orchestrator |
2026-03-17 00:32:47.958527 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-17 00:32:47.958537 | orchestrator | Tuesday 17 March 2026 00:32:46 +0000 (0:00:00.274)       0:04:57.913 *********
2026-03-17 00:32:47.958548 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:47.958560 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:47.958571 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:47.958582 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:47.958592 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:47.958603 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:47.958613 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:47.958624 | orchestrator |
2026-03-17 00:32:47.958635 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-17 00:32:47.958646 | orchestrator | Tuesday 17 March 2026 00:32:47 +0000 (0:00:00.737)       0:04:58.651 *********
2026-03-17 00:32:47.958659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:32:47.958672 | orchestrator |
2026-03-17 00:32:47.958683 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-17 00:32:47.958708 | orchestrator | Tuesday 17 March 2026 00:32:47 +0000 (0:00:00.363)       0:04:59.015 *********
2026-03-17 00:34:09.334918 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335021 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.335032 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.335038 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.335067 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.335073 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.335079 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.335085 | orchestrator |
2026-03-17 00:34:09.335092 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-17 00:34:09.335100 | orchestrator | Tuesday 17 March 2026 00:32:57 +0000 (0:00:09.393)       0:05:08.408 *********
2026-03-17 00:34:09.335106 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335112 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335118 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335124 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335130 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335136 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335141 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335146 | orchestrator |
2026-03-17 00:34:09.335152 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-17 00:34:09.335158 | orchestrator | Tuesday 17 March 2026 00:32:58 +0000 (0:00:01.267)       0:05:09.676 *********
2026-03-17 00:34:09.335164 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335170 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335176 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335182 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335188 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335194 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335200 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335206 | orchestrator |
2026-03-17 00:34:09.335212 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-17 00:34:09.335219 | orchestrator | Tuesday 17 March 2026 00:32:59 +0000 (0:00:01.020)       0:05:10.696 *********
2026-03-17 00:34:09.335225 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335231 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335238 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335244 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335248 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335251 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335255 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335259 | orchestrator |
2026-03-17 00:34:09.335263 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-17 00:34:09.335268 | orchestrator | Tuesday 17 March 2026 00:32:59 +0000 (0:00:00.301)       0:05:10.998 *********
2026-03-17 00:34:09.335271 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335275 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335279 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335283 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335286 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335290 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335294 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335297 | orchestrator |
2026-03-17 00:34:09.335301 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-17 00:34:09.335305 | orchestrator | Tuesday 17 March 2026 00:33:00 +0000 (0:00:00.286)       0:05:11.285 *********
2026-03-17 00:34:09.335309 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335312 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335316 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335320 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335323 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335327 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335330 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335334 | orchestrator |
2026-03-17 00:34:09.335338 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-17 00:34:09.335342 | orchestrator | Tuesday 17 March 2026 00:33:00 +0000 (0:00:00.298)       0:05:11.583 *********
2026-03-17 00:34:09.335346 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335349 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335353 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335362 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335366 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335370 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335373 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335377 | orchestrator |
2026-03-17 00:34:09.335381 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-17 00:34:09.335384 | orchestrator | Tuesday 17 March 2026 00:33:05 +0000 (0:00:05.166)       0:05:16.750 *********
2026-03-17 00:34:09.335391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:34:09.335406 | orchestrator |
2026-03-17 00:34:09.335410 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-17 00:34:09.335415 | orchestrator | Tuesday 17 March 2026 00:33:06 +0000 (0:00:00.406)       0:05:17.156 *********
2026-03-17 00:34:09.335419 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335423 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-17 00:34:09.335428 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335432 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-17 00:34:09.335437 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:09.335441 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335445 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-17 00:34:09.335450 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:09.335454 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335458 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:09.335462 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-17 00:34:09.335467 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335471 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-17 00:34:09.335475 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:09.335479 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:09.335484 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335500 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-17 00:34:09.335505 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:09.335509 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.335514 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-17 00:34:09.335518 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:09.335522 | orchestrator |
2026-03-17 00:34:09.335527 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-17 00:34:09.335531 | orchestrator | Tuesday 17 March 2026 00:33:06 +0000 (0:00:00.327)       0:05:17.483 *********
2026-03-17 00:34:09.335536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:34:09.335540 | orchestrator |
2026-03-17 00:34:09.335544 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-17 00:34:09.335549 | orchestrator | Tuesday 17 March 2026 00:33:06 +0000 (0:00:00.431)       0:05:17.915 *********
2026-03-17 00:34:09.335553 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-17 00:34:09.335557 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-17 00:34:09.335561 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:09.335565 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-17 00:34:09.335580 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:09.335585 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-17 00:34:09.335589 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:09.335599 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-17 00:34:09.335604 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:09.335608 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:09.335612 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-17 00:34:09.335616 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:09.335620 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-17 00:34:09.335625 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:09.335629 | orchestrator |
2026-03-17 00:34:09.335633 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-17 00:34:09.335638 | orchestrator | Tuesday 17 March 2026 00:33:07 +0000 (0:00:00.301)       0:05:18.216 *********
2026-03-17 00:34:09.335642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:34:09.335646 | orchestrator |
2026-03-17 00:34:09.335651 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-17 00:34:09.335655 | orchestrator | Tuesday 17 March 2026 00:33:07 +0000 (0:00:00.392)       0:05:18.609 *********
2026-03-17 00:34:09.335659 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.335664 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.335668 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.335672 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.335676 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.335680 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.335684 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.335687 | orchestrator |
2026-03-17 00:34:09.335691 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-17 00:34:09.335695 | orchestrator | Tuesday 17 March 2026 00:33:41 +0000 (0:00:34.432)       0:05:53.042 *********
2026-03-17 00:34:09.335699 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.335702 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.335706 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.335710 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.335714 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.335717 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.335721 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.335725 | orchestrator |
2026-03-17 00:34:09.335731 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-17 00:34:09.335735 | orchestrator | Tuesday 17 March 2026 00:33:51 +0000 (0:00:09.589)       0:06:02.632 *********
2026-03-17 00:34:09.335738 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.335742 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.335746 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.335749 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.335753 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.335757 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.335760 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.335764 | orchestrator |
2026-03-17 00:34:09.335768 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-17 00:34:09.335772 | orchestrator | Tuesday 17 March 2026 00:33:59 +0000 (0:00:08.357)       0:06:10.989 *********
2026-03-17 00:34:09.335775 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.335779 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.335783 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.335787 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.335790 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.335794 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.335798 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.335801 | orchestrator |
2026-03-17 00:34:09.335805 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-17 00:34:09.335823 | orchestrator | Tuesday 17 March 2026 00:34:01 +0000 (0:00:01.885)       0:06:12.875 *********
2026-03-17 00:34:09.335828 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.335831 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.335835 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.335839 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.335843 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.335846 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.335850 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.335854 | orchestrator |
2026-03-17 00:34:09.335860 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-17 00:34:20.624657 | orchestrator | Tuesday 17 March 2026 00:34:09 +0000 (0:00:07.514)       0:06:20.389 *********
2026-03-17 00:34:20.624764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:34:20.624781 | orchestrator |
2026-03-17 00:34:20.624794 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-17 00:34:20.624886 | orchestrator | Tuesday 17 March 2026 00:34:09 +0000 (0:00:00.429)       0:06:20.819 *********
2026-03-17 00:34:20.624899 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:20.624911 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:20.624922 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:20.624935 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:20.624954 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:20.624973 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:20.624992 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:20.625010 | orchestrator |
2026-03-17 00:34:20.625028 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-17 00:34:20.625046 | orchestrator | Tuesday 17 March 2026 00:34:10 +0000 (0:00:00.722)       0:06:21.542 *********
2026-03-17 00:34:20.625064 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:20.625082 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:20.625098 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:20.625115 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:20.625133 |
orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:20.625150 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:20.625167 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:20.625184 | orchestrator | 2026-03-17 00:34:20.625203 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-17 00:34:20.625221 | orchestrator | Tuesday 17 March 2026 00:34:12 +0000 (0:00:01.940) 0:06:23.483 ********* 2026-03-17 00:34:20.625238 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:20.625256 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:20.625273 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:20.625290 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:20.625308 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:20.625327 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:20.625346 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:20.625364 | orchestrator | 2026-03-17 00:34:20.625384 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-17 00:34:20.625404 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:00.867) 0:06:24.351 ********* 2026-03-17 00:34:20.625423 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.625441 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.625459 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.625477 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:20.625495 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:34:20.625514 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:34:20.625533 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:20.625551 | orchestrator | 2026-03-17 00:34:20.625567 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-17 00:34:20.625584 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:00.252) 
0:06:24.603 ********* 2026-03-17 00:34:20.625636 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.625656 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.625674 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.625693 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:20.625710 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:34:20.625728 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:34:20.625745 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:20.625762 | orchestrator | 2026-03-17 00:34:20.625781 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-17 00:34:20.625828 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:00.380) 0:06:24.984 ********* 2026-03-17 00:34:20.625846 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:20.625864 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:20.625882 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:20.625894 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:20.625904 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:20.625915 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:20.625942 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:20.625954 | orchestrator | 2026-03-17 00:34:20.625965 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-17 00:34:20.625975 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.409) 0:06:25.393 ********* 2026-03-17 00:34:20.625986 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.625997 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.626008 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.626082 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:20.626093 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:34:20.626104 | orchestrator | skipping: [testbed-node-4] 2026-03-17 
00:34:20.626114 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:20.626125 | orchestrator | 2026-03-17 00:34:20.626136 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-17 00:34:20.626148 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.274) 0:06:25.667 ********* 2026-03-17 00:34:20.626158 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:20.626169 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:20.626179 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:20.626190 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:20.626201 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:20.626211 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:20.626222 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:20.626233 | orchestrator | 2026-03-17 00:34:20.626243 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-17 00:34:20.626254 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.291) 0:06:25.959 ********* 2026-03-17 00:34:20.626265 | orchestrator | ok: [testbed-manager] =>  2026-03-17 00:34:20.626276 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626286 | orchestrator | ok: [testbed-node-0] =>  2026-03-17 00:34:20.626297 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626308 | orchestrator | ok: [testbed-node-1] =>  2026-03-17 00:34:20.626319 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626329 | orchestrator | ok: [testbed-node-2] =>  2026-03-17 00:34:20.626340 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626375 | orchestrator | ok: [testbed-node-3] =>  2026-03-17 00:34:20.626387 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626398 | orchestrator | ok: [testbed-node-4] =>  2026-03-17 00:34:20.626408 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626419 | orchestrator | ok: [testbed-node-5] =>  
2026-03-17 00:34:20.626430 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:34:20.626440 | orchestrator | 2026-03-17 00:34:20.626451 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-17 00:34:20.626462 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.249) 0:06:26.209 ********* 2026-03-17 00:34:20.626472 | orchestrator | ok: [testbed-manager] =>  2026-03-17 00:34:20.626495 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626506 | orchestrator | ok: [testbed-node-0] =>  2026-03-17 00:34:20.626516 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626527 | orchestrator | ok: [testbed-node-1] =>  2026-03-17 00:34:20.626537 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626548 | orchestrator | ok: [testbed-node-2] =>  2026-03-17 00:34:20.626559 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626569 | orchestrator | ok: [testbed-node-3] =>  2026-03-17 00:34:20.626579 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626595 | orchestrator | ok: [testbed-node-4] =>  2026-03-17 00:34:20.626613 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626632 | orchestrator | ok: [testbed-node-5] =>  2026-03-17 00:34:20.626649 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:34:20.626667 | orchestrator | 2026-03-17 00:34:20.626685 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-17 00:34:20.626704 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.268) 0:06:26.477 ********* 2026-03-17 00:34:20.626723 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.626742 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.626760 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.626776 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:20.626788 | orchestrator | skipping: [testbed-node-3] 
2026-03-17 00:34:20.626822 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:34:20.626841 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:20.626852 | orchestrator | 2026-03-17 00:34:20.626863 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-17 00:34:20.626874 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.242) 0:06:26.720 ********* 2026-03-17 00:34:20.626885 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.626896 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.626906 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.626917 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:20.626928 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:34:20.626938 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:34:20.626949 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:20.626965 | orchestrator | 2026-03-17 00:34:20.626981 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-17 00:34:20.627000 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.225) 0:06:26.945 ********* 2026-03-17 00:34:20.627021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:34:20.627043 | orchestrator | 2026-03-17 00:34:20.627061 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-17 00:34:20.627073 | orchestrator | Tuesday 17 March 2026 00:34:16 +0000 (0:00:00.397) 0:06:27.343 ********* 2026-03-17 00:34:20.627084 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:20.627094 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:20.627105 | orchestrator | ok: [testbed-node-4] 2026-03-17 
00:34:20.627116 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:20.627127 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:20.627137 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:20.627148 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:20.627159 | orchestrator | 2026-03-17 00:34:20.627170 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-17 00:34:20.627181 | orchestrator | Tuesday 17 March 2026 00:34:17 +0000 (0:00:00.882) 0:06:28.225 ********* 2026-03-17 00:34:20.627191 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:20.627210 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:20.627222 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:20.627232 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:20.627243 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:20.627262 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:20.627273 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:20.627284 | orchestrator | 2026-03-17 00:34:20.627295 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-17 00:34:20.627307 | orchestrator | Tuesday 17 March 2026 00:34:20 +0000 (0:00:03.112) 0:06:31.337 ********* 2026-03-17 00:34:20.627318 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-17 00:34:20.627329 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-17 00:34:20.627340 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-17 00:34:20.627350 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-17 00:34:20.627361 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-17 00:34:20.627372 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:20.627383 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-17 00:34:20.627394 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-17 00:34:20.627405 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-17 00:34:20.627415 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-17 00:34:20.627426 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:20.627437 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-17 00:34:20.627447 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-17 00:34:20.627458 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-17 00:34:20.627469 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:20.627480 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-17 00:34:20.627500 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-17 00:35:27.599701 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-17 00:35:27.599818 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:27.599831 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-17 00:35:27.599840 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-17 00:35:27.599849 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-17 00:35:27.599856 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:27.599865 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:27.599873 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-17 00:35:27.599881 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-17 00:35:27.599889 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-17 00:35:27.599897 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:27.599905 | orchestrator | 2026-03-17 00:35:27.599914 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-17 00:35:27.599924 | orchestrator | Tuesday 
17 March 2026 00:34:20 +0000 (0:00:00.578) 0:06:31.916 ********* 2026-03-17 00:35:27.599932 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.599940 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.599948 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.599956 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.599963 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.599971 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.599979 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.599987 | orchestrator | 2026-03-17 00:35:27.599995 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-17 00:35:27.600003 | orchestrator | Tuesday 17 March 2026 00:34:28 +0000 (0:00:07.685) 0:06:39.601 ********* 2026-03-17 00:35:27.600011 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600019 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600027 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600034 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600042 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600050 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600078 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600086 | orchestrator | 2026-03-17 00:35:27.600095 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-17 00:35:27.600103 | orchestrator | Tuesday 17 March 2026 00:34:29 +0000 (0:00:01.179) 0:06:40.781 ********* 2026-03-17 00:35:27.600110 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600118 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600126 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600134 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600141 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600149 | orchestrator | 
changed: [testbed-node-5] 2026-03-17 00:35:27.600157 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600165 | orchestrator | 2026-03-17 00:35:27.600173 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-17 00:35:27.600181 | orchestrator | Tuesday 17 March 2026 00:34:38 +0000 (0:00:09.057) 0:06:49.839 ********* 2026-03-17 00:35:27.600189 | orchestrator | changed: [testbed-manager] 2026-03-17 00:35:27.600197 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600205 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600212 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600220 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600228 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600236 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600244 | orchestrator | 2026-03-17 00:35:27.600252 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-17 00:35:27.600260 | orchestrator | Tuesday 17 March 2026 00:34:42 +0000 (0:00:03.415) 0:06:53.254 ********* 2026-03-17 00:35:27.600267 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600275 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600283 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600291 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600298 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600306 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600314 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600322 | orchestrator | 2026-03-17 00:35:27.600330 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-17 00:35:27.600350 | orchestrator | Tuesday 17 March 2026 00:34:43 +0000 (0:00:01.390) 0:06:54.645 ********* 2026-03-17 00:35:27.600358 | orchestrator | ok: [testbed-manager] 
2026-03-17 00:35:27.600366 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600374 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600381 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600389 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600397 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600405 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600413 | orchestrator | 2026-03-17 00:35:27.600420 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-17 00:35:27.600428 | orchestrator | Tuesday 17 March 2026 00:34:44 +0000 (0:00:01.414) 0:06:56.059 ********* 2026-03-17 00:35:27.600436 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:27.600444 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:27.600453 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:27.600461 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:27.600468 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:27.600476 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:27.600484 | orchestrator | changed: [testbed-manager] 2026-03-17 00:35:27.600492 | orchestrator | 2026-03-17 00:35:27.600500 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-17 00:35:27.600508 | orchestrator | Tuesday 17 March 2026 00:34:45 +0000 (0:00:00.607) 0:06:56.667 ********* 2026-03-17 00:35:27.600516 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600523 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600531 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600546 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600554 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600562 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600569 | orchestrator | changed: [testbed-node-1] 2026-03-17 
00:35:27.600577 | orchestrator | 2026-03-17 00:35:27.600585 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-17 00:35:27.600608 | orchestrator | Tuesday 17 March 2026 00:34:55 +0000 (0:00:10.318) 0:07:06.986 ********* 2026-03-17 00:35:27.600617 | orchestrator | changed: [testbed-manager] 2026-03-17 00:35:27.600624 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600632 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600640 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600648 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600655 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600663 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600671 | orchestrator | 2026-03-17 00:35:27.600679 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-17 00:35:27.600687 | orchestrator | Tuesday 17 March 2026 00:34:57 +0000 (0:00:01.179) 0:07:08.166 ********* 2026-03-17 00:35:27.600694 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600702 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600731 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600745 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600753 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600761 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600770 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600784 | orchestrator | 2026-03-17 00:35:27.600792 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-17 00:35:27.600802 | orchestrator | Tuesday 17 March 2026 00:35:07 +0000 (0:00:10.471) 0:07:18.637 ********* 2026-03-17 00:35:27.600814 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.600822 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.600835 | 
orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.600845 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.600853 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.600861 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.600868 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.600876 | orchestrator | 2026-03-17 00:35:27.600884 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-17 00:35:27.600892 | orchestrator | Tuesday 17 March 2026 00:35:20 +0000 (0:00:12.715) 0:07:31.353 ********* 2026-03-17 00:35:27.600900 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-17 00:35:27.600907 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-17 00:35:27.600915 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-17 00:35:27.600923 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-17 00:35:27.600931 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-17 00:35:27.600939 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-17 00:35:27.600946 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-17 00:35:27.600954 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-17 00:35:27.600962 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-17 00:35:27.600970 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-17 00:35:27.600977 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-17 00:35:27.600985 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-17 00:35:27.600993 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-17 00:35:27.601001 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-17 00:35:27.601009 | orchestrator | 2026-03-17 00:35:27.601016 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-17 00:35:27.601024 | orchestrator | Tuesday 17 March 2026 00:35:21 +0000 (0:00:01.214) 0:07:32.568 ********* 2026-03-17 00:35:27.601032 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:27.601071 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:27.601079 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:27.601087 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:27.601095 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:27.601103 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:27.601110 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:27.601118 | orchestrator | 2026-03-17 00:35:27.601126 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-17 00:35:27.601134 | orchestrator | Tuesday 17 March 2026 00:35:22 +0000 (0:00:00.636) 0:07:33.204 ********* 2026-03-17 00:35:27.601142 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:27.601149 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:27.601157 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:27.601165 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:27.601173 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:27.601181 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:27.601189 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:27.601196 | orchestrator | 2026-03-17 00:35:27.601205 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-17 00:35:27.601214 | orchestrator | Tuesday 17 March 2026 00:35:26 +0000 (0:00:04.710) 0:07:37.914 ********* 2026-03-17 00:35:27.601222 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:27.601230 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:27.601238 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:27.601245 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:35:27.601253 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:27.601261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:27.601268 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:27.601276 | orchestrator | 2026-03-17 00:35:27.601317 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-17 00:35:27.601326 | orchestrator | Tuesday 17 March 2026 00:35:27 +0000 (0:00:00.496) 0:07:38.411 ********* 2026-03-17 00:35:27.601334 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-17 00:35:27.601343 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-17 00:35:27.601351 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:27.601359 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-17 00:35:27.601366 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-17 00:35:27.601374 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:27.601382 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-17 00:35:27.601390 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-17 00:35:27.601398 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:27.601412 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-17 00:35:47.720344 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-17 00:35:47.720460 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.720477 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-17 00:35:47.720489 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-17 00:35:47.720500 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.720511 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-17 00:35:47.720522 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-17 00:35:47.720533 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.720543 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-17 00:35:47.720554 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-17 00:35:47.720565 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.720576 | orchestrator | 2026-03-17 00:35:47.720589 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-17 00:35:47.720629 | orchestrator | Tuesday 17 March 2026 00:35:27 +0000 (0:00:00.530) 0:07:38.941 ********* 2026-03-17 00:35:47.720641 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.720652 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:47.720662 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.720673 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.720717 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.720737 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.720755 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.720774 | orchestrator | 2026-03-17 00:35:47.720786 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-17 00:35:47.720797 | orchestrator | Tuesday 17 March 2026 00:35:28 +0000 (0:00:00.470) 0:07:39.412 ********* 2026-03-17 00:35:47.720808 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.720818 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:47.720829 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.720840 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.720850 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.720861 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.720871 | orchestrator | skipping: [testbed-node-5] 
2026-03-17 00:35:47.720882 | orchestrator | 
2026-03-17 00:35:47.720894 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-17 00:35:47.720904 | orchestrator | Tuesday 17 March 2026 00:35:28 +0000 (0:00:00.625) 0:07:40.038 *********
2026-03-17 00:35:47.720915 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:35:47.720925 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:35:47.720936 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:35:47.720947 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:35:47.720957 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:35:47.720968 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:35:47.720978 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:35:47.720989 | orchestrator | 
2026-03-17 00:35:47.721000 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-17 00:35:47.721010 | orchestrator | Tuesday 17 March 2026 00:35:29 +0000 (0:00:00.508) 0:07:40.546 *********
2026-03-17 00:35:47.721021 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.721032 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.721043 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.721054 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.721073 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.721090 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.721108 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.721128 | orchestrator | 
2026-03-17 00:35:47.721147 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-17 00:35:47.721165 | orchestrator | Tuesday 17 March 2026 00:35:31 +0000 (0:00:01.762) 0:07:42.309 *********
2026-03-17 00:35:47.721186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:35:47.721207 | orchestrator | 
2026-03-17 00:35:47.721235 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-17 00:35:47.721247 | orchestrator | Tuesday 17 March 2026 00:35:32 +0000 (0:00:00.786) 0:07:43.095 *********
2026-03-17 00:35:47.721264 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.721282 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:47.721301 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:47.721318 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:47.721339 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:47.721357 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:47.721374 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:47.721392 | orchestrator | 
2026-03-17 00:35:47.721410 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-17 00:35:47.721444 | orchestrator | Tuesday 17 March 2026 00:35:33 +0000 (0:00:01.100) 0:07:44.195 *********
2026-03-17 00:35:47.721464 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.721482 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:47.721500 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:47.721514 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:47.721527 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:47.721545 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:47.721564 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:47.721583 | orchestrator | 
2026-03-17 00:35:47.721602 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-17 00:35:47.721620 | orchestrator | Tuesday 17 March 2026 00:35:34 +0000 (0:00:00.885) 0:07:45.081 *********
2026-03-17 00:35:47.721638 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.721657 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:47.721676 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:47.721721 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:47.721733 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:47.721744 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:47.721754 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:47.721770 | orchestrator | 
2026-03-17 00:35:47.721789 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-17 00:35:47.721835 | orchestrator | Tuesday 17 March 2026 00:35:35 +0000 (0:00:01.357) 0:07:46.438 *********
2026-03-17 00:35:47.721857 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:35:47.721878 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.721897 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.721911 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.721928 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.721947 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.721960 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.721971 | orchestrator | 
2026-03-17 00:35:47.721982 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-17 00:35:47.721992 | orchestrator | Tuesday 17 March 2026 00:35:36 +0000 (0:00:01.542) 0:07:47.981 *********
2026-03-17 00:35:47.722003 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.722014 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:47.722102 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:47.722121 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:47.722140 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:47.722158 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:47.722175 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:47.722186 | orchestrator | 
2026-03-17 00:35:47.722197 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-17 00:35:47.722208 | orchestrator | Tuesday 17 March 2026 00:35:38 +0000 (0:00:01.386) 0:07:49.367 *********
2026-03-17 00:35:47.722218 | orchestrator | changed: [testbed-manager]
2026-03-17 00:35:47.722229 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:47.722240 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:47.722250 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:47.722261 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:47.722271 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:47.722282 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:47.722293 | orchestrator | 
2026-03-17 00:35:47.722303 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-17 00:35:47.722314 | orchestrator | Tuesday 17 March 2026 00:35:39 +0000 (0:00:01.599) 0:07:50.967 *********
2026-03-17 00:35:47.722325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:35:47.722337 | orchestrator | 
2026-03-17 00:35:47.722348 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-17 00:35:47.722359 | orchestrator | Tuesday 17 March 2026 00:35:40 +0000 (0:00:00.902) 0:07:51.869 *********
2026-03-17 00:35:47.722388 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.722399 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.722410 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.722420 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.722431 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.722442 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.722452 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.722463 | orchestrator | 
2026-03-17 00:35:47.722474 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-17 00:35:47.722485 | orchestrator | Tuesday 17 March 2026 00:35:42 +0000 (0:00:01.591) 0:07:53.461 *********
2026-03-17 00:35:47.722501 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.722518 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.722535 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.722546 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.722557 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.722567 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.722578 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.722589 | orchestrator | 
2026-03-17 00:35:47.722599 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-17 00:35:47.722610 | orchestrator | Tuesday 17 March 2026 00:35:43 +0000 (0:00:01.315) 0:07:54.777 *********
2026-03-17 00:35:47.722621 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.722632 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.722642 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.722653 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.722664 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.722674 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.722727 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.722740 | orchestrator | 
2026-03-17 00:35:47.722799 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-17 00:35:47.722811 | orchestrator | Tuesday 17 March 2026 00:35:45 +0000 (0:00:01.379) 0:07:56.156 *********
2026-03-17 00:35:47.722822 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:35:47.722833 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:35:47.722843 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:35:47.722854 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:35:47.722864 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:35:47.722875 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:35:47.722886 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:47.722896 | orchestrator | 
2026-03-17 00:35:47.722907 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-17 00:35:47.722918 | orchestrator | Tuesday 17 March 2026 00:35:46 +0000 (0:00:01.626) 0:07:57.783 *********
2026-03-17 00:35:47.722929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:35:47.722941 | orchestrator | 
2026-03-17 00:35:47.722952 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:35:47.722962 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.785) 0:07:58.568 *********
2026-03-17 00:35:47.722973 | orchestrator | 
2026-03-17 00:35:47.722984 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:35:47.722995 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.041) 0:07:58.610 *********
2026-03-17 00:35:47.723005 | orchestrator | 
2026-03-17 00:35:47.723016 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:35:47.723027 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.129) 0:07:58.739 *********
2026-03-17 00:35:47.723037 | orchestrator | 
2026-03-17 00:35:47.723048 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:35:47.723071 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:07:58.775 *********
2026-03-17 00:36:15.141773 | orchestrator | 
2026-03-17 00:36:15.141906 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:36:15.141964 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.036) 0:07:58.811 *********
2026-03-17 00:36:15.141981 | orchestrator | 
2026-03-17 00:36:15.141998 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:36:15.142015 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.038) 0:07:58.850 *********
2026-03-17 00:36:15.142120 | orchestrator | 
2026-03-17 00:36:15.142139 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:36:15.142175 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:07:58.886 *********
2026-03-17 00:36:15.142208 | orchestrator | 
2026-03-17 00:36:15.142226 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-17 00:36:15.142244 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.037) 0:07:58.923 *********
2026-03-17 00:36:15.142263 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:15.142282 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:15.142300 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:15.142317 | orchestrator | 
2026-03-17 00:36:15.142334 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-17 00:36:15.142352 | orchestrator | Tuesday 17 March 2026 00:35:49 +0000 (0:00:01.586) 0:08:00.510 *********
2026-03-17 00:36:15.142370 | orchestrator | changed: [testbed-manager]
2026-03-17 00:36:15.142389 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:15.142407 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:15.142426 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:15.142443 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:15.142460 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:15.142478 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:15.142494 | orchestrator | 
2026-03-17 00:36:15.142511 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-17 00:36:15.142528 | orchestrator | Tuesday 17 March 2026 00:35:50 +0000 (0:00:01.220) 0:08:01.730 *********
2026-03-17 00:36:15.142544 | orchestrator | changed: [testbed-manager]
2026-03-17 00:36:15.142561 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:15.142576 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:15.142592 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:15.142607 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:15.142623 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:15.142637 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:15.142685 | orchestrator | 
2026-03-17 00:36:15.142703 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-17 00:36:15.142721 | orchestrator | Tuesday 17 March 2026 00:35:51 +0000 (0:00:01.199) 0:08:02.930 *********
2026-03-17 00:36:15.142738 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:15.142755 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:15.142773 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:15.142791 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:15.142810 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:15.142828 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:15.142847 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:15.142865 | orchestrator | 
2026-03-17 00:36:15.142885 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-17 00:36:15.142904 | orchestrator | Tuesday 17 March 2026 00:35:54 +0000 (0:00:02.644) 0:08:05.574 *********
2026-03-17 00:36:15.142923 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:15.142940 | orchestrator | 
2026-03-17 00:36:15.142957 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-17 00:36:15.142975 | orchestrator | Tuesday 17 March 2026 00:35:54 +0000 (0:00:00.088) 0:08:05.663 *********
2026-03-17 00:36:15.142992 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.143009 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:15.143027 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:15.143045 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:15.143065 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:15.143102 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:15.143121 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:15.143140 | orchestrator | 
2026-03-17 00:36:15.143158 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-17 00:36:15.143194 | orchestrator | Tuesday 17 March 2026 00:35:55 +0000 (0:00:01.162) 0:08:06.826 *********
2026-03-17 00:36:15.143215 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:15.143233 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:15.143251 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:15.143267 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:15.143285 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:15.143303 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:15.143321 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:15.143339 | orchestrator | 
2026-03-17 00:36:15.143356 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-17 00:36:15.143375 | orchestrator | Tuesday 17 March 2026 00:35:56 +0000 (0:00:00.508) 0:08:07.334 *********
2026-03-17 00:36:15.143395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:36:15.143416 | orchestrator | 
2026-03-17 00:36:15.143434 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-17 00:36:15.143452 | orchestrator | Tuesday 17 March 2026 00:35:57 +0000 (0:00:00.846) 0:08:08.180 *********
2026-03-17 00:36:15.143471 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.143490 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:15.143509 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:15.143526 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:15.143544 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:15.143562 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:15.143580 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:15.143598 | orchestrator | 
2026-03-17 00:36:15.143616 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-17 00:36:15.143633 | orchestrator | Tuesday 17 March 2026 00:35:58 +0000 (0:00:01.010) 0:08:09.191 *********
2026-03-17 00:36:15.143724 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-17 00:36:15.143775 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-17 00:36:15.143795 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-17 00:36:15.143812 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-17 00:36:15.143830 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-17 00:36:15.143848 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-17 00:36:15.143865 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-17 00:36:15.143884 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-17 00:36:15.143900 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-17 00:36:15.143917 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-17 00:36:15.143933 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-17 00:36:15.143951 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-17 00:36:15.143969 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-17 00:36:15.143987 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-17 00:36:15.144004 | orchestrator | 
2026-03-17 00:36:15.144022 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-17 00:36:15.144038 | orchestrator | Tuesday 17 March 2026 00:36:00 +0000 (0:00:02.612) 0:08:11.804 *********
2026-03-17 00:36:15.144055 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:15.144073 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:15.144090 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:15.144107 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:15.144140 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:15.144157 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:15.144174 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:15.144191 | orchestrator | 
2026-03-17 00:36:15.144208 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-17 00:36:15.144225 | orchestrator | Tuesday 17 March 2026 00:36:01 +0000 (0:00:00.462) 0:08:12.266 *********
2026-03-17 00:36:15.144246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:36:15.144268 | orchestrator | 
2026-03-17 00:36:15.144285 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-17 00:36:15.144303 | orchestrator | Tuesday 17 March 2026 00:36:02 +0000 (0:00:00.932) 0:08:13.199 *********
2026-03-17 00:36:15.144321 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.144339 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:15.144356 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:15.144373 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:15.144391 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:15.144409 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:15.144427 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:15.144444 | orchestrator | 
2026-03-17 00:36:15.144461 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-17 00:36:15.144479 | orchestrator | Tuesday 17 March 2026 00:36:02 +0000 (0:00:00.860) 0:08:14.059 *********
2026-03-17 00:36:15.144497 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.144515 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:15.144531 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:15.144548 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:15.144564 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:15.144580 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:15.144596 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:15.144612 | orchestrator | 
2026-03-17 00:36:15.144629 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-17 00:36:15.144645 | orchestrator | Tuesday 17 March 2026 00:36:03 +0000 (0:00:00.807) 0:08:14.867 *********
2026-03-17 00:36:15.144732 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:15.144751 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:15.144770 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:15.144803 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:15.144822 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:15.144840 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:15.144858 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:15.144876 | orchestrator | 
2026-03-17 00:36:15.144896 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-17 00:36:15.144915 | orchestrator | Tuesday 17 March 2026 00:36:04 +0000 (0:00:00.485) 0:08:15.353 *********
2026-03-17 00:36:15.144933 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.144952 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:15.144970 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:15.144989 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:15.145008 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:15.145027 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:15.145046 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:15.145064 | orchestrator | 
2026-03-17 00:36:15.145083 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-17 00:36:15.145102 | orchestrator | Tuesday 17 March 2026 00:36:05 +0000 (0:00:01.656) 0:08:17.010 *********
2026-03-17 00:36:15.145120 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:15.145138 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:15.145155 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:15.145174 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:15.145193 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:15.145227 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:15.145246 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:15.145266 | orchestrator | 
2026-03-17 00:36:15.145285 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-17 00:36:15.145305 | orchestrator | Tuesday 17 March 2026 00:36:06 +0000 (0:00:00.632) 0:08:17.642 *********
2026-03-17 00:36:15.145324 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:15.145343 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:15.145363 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:15.145380 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:15.145397 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:15.145415 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:15.145451 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:48.043714 | orchestrator | 
2026-03-17 00:36:48.043812 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-17 00:36:48.043826 | orchestrator | Tuesday 17 March 2026 00:36:15 +0000 (0:00:08.626) 0:08:26.268 *********
2026-03-17 00:36:48.043836 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.043846 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:48.043856 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:48.043865 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:48.043873 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:48.043882 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:48.043891 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:48.043900 | orchestrator | 
2026-03-17 00:36:48.043909 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-17 00:36:48.043918 | orchestrator | Tuesday 17 March 2026 00:36:16 +0000 (0:00:01.349) 0:08:27.617 *********
2026-03-17 00:36:48.043927 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.043935 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:48.043944 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:48.043953 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:48.043961 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:48.043970 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:48.043979 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:48.043988 | orchestrator | 
2026-03-17 00:36:48.043996 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-17 00:36:48.044005 | orchestrator | Tuesday 17 March 2026 00:36:18 +0000 (0:00:01.782) 0:08:29.400 *********
2026-03-17 00:36:48.044014 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044023 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:48.044031 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:48.044040 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:48.044049 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:48.044058 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:48.044066 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:48.044075 | orchestrator | 
2026-03-17 00:36:48.044084 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-17 00:36:48.044092 | orchestrator | Tuesday 17 March 2026 00:36:20 +0000 (0:00:01.829) 0:08:31.230 *********
2026-03-17 00:36:48.044101 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044110 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:48.044119 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:48.044127 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:48.044136 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:48.044145 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:48.044153 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:48.044162 | orchestrator | 
2026-03-17 00:36:48.044172 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-17 00:36:48.044183 | orchestrator | Tuesday 17 March 2026 00:36:21 +0000 (0:00:00.860) 0:08:32.090 *********
2026-03-17 00:36:48.044193 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:48.044203 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:48.044213 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:48.044243 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:48.044253 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:48.044263 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:48.044273 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:48.044283 | orchestrator | 
2026-03-17 00:36:48.044293 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-17 00:36:48.044304 | orchestrator | Tuesday 17 March 2026 00:36:21 +0000 (0:00:00.778) 0:08:32.869 *********
2026-03-17 00:36:48.044314 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:48.044324 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:48.044334 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:48.044345 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:48.044355 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:48.044364 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:48.044373 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:48.044381 | orchestrator | 
2026-03-17 00:36:48.044390 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-17 00:36:48.044398 | orchestrator | Tuesday 17 March 2026 00:36:22 +0000 (0:00:00.636) 0:08:33.506 *********
2026-03-17 00:36:48.044407 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044416 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:48.044425 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:48.044433 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:48.044442 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:48.044450 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:48.044459 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:48.044467 | orchestrator | 
2026-03-17 00:36:48.044476 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-17 00:36:48.044485 | orchestrator | Tuesday 17 March 2026 00:36:22 +0000 (0:00:00.489) 0:08:33.995 *********
2026-03-17 00:36:48.044493 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044502 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:48.044510 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:48.044519 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:48.044527 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:48.044535 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:48.044544 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:48.044553 | orchestrator | 
2026-03-17 00:36:48.044561 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-17 00:36:48.044570 | orchestrator | Tuesday 17 March 2026 00:36:23 +0000 (0:00:00.513) 0:08:34.509 *********
2026-03-17 00:36:48.044579 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044587 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:48.044596 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:48.044604 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:48.044671 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:48.044681 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:48.044690 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:48.044698 | orchestrator | 
2026-03-17 00:36:48.044707 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-17 00:36:48.044716 | orchestrator | Tuesday 17 March 2026 00:36:23 +0000 (0:00:00.525) 0:08:35.034 *********
2026-03-17 00:36:48.044725 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:48.044733 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:48.044742 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:48.044750 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:48.044759 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:48.044767 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:48.044791 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:48.044801 | orchestrator | 2026-03-17 00:36:48.044825 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-17 00:36:48.044835 | orchestrator | Tuesday 17 March 2026 00:36:29 +0000 (0:00:05.522) 0:08:40.556 ********* 2026-03-17 00:36:48.044844 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:48.044853 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:48.044869 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:48.044878 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:48.044886 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:48.044895 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:48.044903 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:48.044912 | orchestrator | 2026-03-17 00:36:48.044921 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-17 00:36:48.044929 | orchestrator | Tuesday 17 March 2026 00:36:30 +0000 (0:00:00.661) 0:08:41.218 ********* 2026-03-17 00:36:48.044939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:36:48.044950 | orchestrator | 2026-03-17 00:36:48.044959 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-17 00:36:48.044968 | orchestrator | Tuesday 17 March 2026 00:36:30 +0000 (0:00:00.780) 0:08:41.998 ********* 2026-03-17 00:36:48.044977 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:48.044985 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:48.044994 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:48.045003 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:48.045011 | 
orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:48.045020 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:48.045028 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:48.045037 | orchestrator | 2026-03-17 00:36:48.045045 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-17 00:36:48.045054 | orchestrator | Tuesday 17 March 2026 00:36:33 +0000 (0:00:02.164) 0:08:44.162 ********* 2026-03-17 00:36:48.045063 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:48.045071 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:48.045080 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:48.045088 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:48.045097 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:48.045105 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:48.045113 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:48.045122 | orchestrator | 2026-03-17 00:36:48.045131 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-17 00:36:48.045139 | orchestrator | Tuesday 17 March 2026 00:36:34 +0000 (0:00:01.310) 0:08:45.473 ********* 2026-03-17 00:36:48.045148 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:48.045157 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:48.045165 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:48.045173 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:48.045194 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:48.045203 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:48.045221 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:48.045230 | orchestrator | 2026-03-17 00:36:48.045239 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-17 00:36:48.045248 | orchestrator | Tuesday 17 March 2026 00:36:35 +0000 (0:00:00.856) 0:08:46.330 ********* 2026-03-17 00:36:48.045257 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045267 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045276 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045285 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045298 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045308 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045322 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:36:48.045331 | orchestrator | 2026-03-17 00:36:48.045340 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-17 00:36:48.045349 | orchestrator | Tuesday 17 March 2026 00:36:36 +0000 (0:00:01.729) 0:08:48.060 ********* 2026-03-17 00:36:48.045358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:36:48.045367 | orchestrator | 2026-03-17 00:36:48.045376 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-17 00:36:48.045385 | 
orchestrator | Tuesday 17 March 2026 00:36:37 +0000 (0:00:00.960) 0:08:49.020 ********* 2026-03-17 00:36:48.045393 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:48.045402 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:48.045411 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:48.045420 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:48.045428 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:48.045437 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:48.045446 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:48.045454 | orchestrator | 2026-03-17 00:36:48.045468 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-17 00:37:19.251750 | orchestrator | Tuesday 17 March 2026 00:36:48 +0000 (0:00:10.079) 0:08:59.100 ********* 2026-03-17 00:37:19.251931 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:19.251962 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:19.251982 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:19.252003 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:19.252024 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:19.252037 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:19.252048 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:19.252059 | orchestrator | 2026-03-17 00:37:19.252071 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-17 00:37:19.252082 | orchestrator | Tuesday 17 March 2026 00:36:49 +0000 (0:00:01.701) 0:09:00.801 ********* 2026-03-17 00:37:19.252100 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:19.252119 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:19.252137 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:19.252155 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:19.252175 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:19.252196 | orchestrator | ok: [testbed-node-5] 
2026-03-17 00:37:19.252216 | orchestrator | 2026-03-17 00:37:19.252232 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-17 00:37:19.252245 | orchestrator | Tuesday 17 March 2026 00:36:51 +0000 (0:00:01.592) 0:09:02.393 ********* 2026-03-17 00:37:19.252258 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.252271 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.252285 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.252298 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.252310 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.252322 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.252334 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.252346 | orchestrator | 2026-03-17 00:37:19.252358 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-17 00:37:19.252371 | orchestrator | 2026-03-17 00:37:19.252384 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-17 00:37:19.252396 | orchestrator | Tuesday 17 March 2026 00:36:52 +0000 (0:00:01.293) 0:09:03.687 ********* 2026-03-17 00:37:19.252408 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:37:19.252420 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:37:19.252459 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:37:19.252472 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:37:19.252487 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:37:19.252507 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:37:19.252525 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:37:19.252543 | orchestrator | 2026-03-17 00:37:19.252630 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-17 00:37:19.252652 | orchestrator | 2026-03-17 00:37:19.252668 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-17 00:37:19.252679 | orchestrator | Tuesday 17 March 2026 00:36:53 +0000 (0:00:00.484) 0:09:04.172 ********* 2026-03-17 00:37:19.252690 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.252701 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.252712 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.252722 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.252733 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.252744 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.252754 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.252765 | orchestrator | 2026-03-17 00:37:19.252776 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-17 00:37:19.252787 | orchestrator | Tuesday 17 March 2026 00:36:54 +0000 (0:00:01.354) 0:09:05.526 ********* 2026-03-17 00:37:19.252798 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:19.252808 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:19.252819 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:19.252830 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:19.252840 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:19.252851 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:19.252862 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:19.252872 | orchestrator | 2026-03-17 00:37:19.252883 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-17 00:37:19.252894 | orchestrator | Tuesday 17 March 2026 00:36:56 +0000 (0:00:01.598) 0:09:07.124 ********* 2026-03-17 00:37:19.252905 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:37:19.252916 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:37:19.252941 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:37:19.252953 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 00:37:19.252963 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:37:19.252974 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:37:19.252985 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:37:19.252995 | orchestrator | 2026-03-17 00:37:19.253006 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-17 00:37:19.253017 | orchestrator | Tuesday 17 March 2026 00:36:56 +0000 (0:00:00.488) 0:09:07.613 ********* 2026-03-17 00:37:19.253028 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:37:19.253041 | orchestrator | 2026-03-17 00:37:19.253052 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-17 00:37:19.253063 | orchestrator | Tuesday 17 March 2026 00:36:57 +0000 (0:00:00.773) 0:09:08.386 ********* 2026-03-17 00:37:19.253075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:37:19.253089 | orchestrator | 2026-03-17 00:37:19.253100 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-17 00:37:19.253111 | orchestrator | Tuesday 17 March 2026 00:36:58 +0000 (0:00:00.963) 0:09:09.350 ********* 2026-03-17 00:37:19.253121 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253132 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.253143 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253153 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253164 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253185 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253196 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253206 | orchestrator | 2026-03-17 00:37:19.253237 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-17 00:37:19.253249 | orchestrator | Tuesday 17 March 2026 00:37:07 +0000 (0:00:09.687) 0:09:19.038 ********* 2026-03-17 00:37:19.253259 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253270 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.253281 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253291 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253302 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253313 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253323 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253334 | orchestrator | 2026-03-17 00:37:19.253345 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-17 00:37:19.253356 | orchestrator | Tuesday 17 March 2026 00:37:08 +0000 (0:00:00.837) 0:09:19.876 ********* 2026-03-17 00:37:19.253367 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253377 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.253388 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253398 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253409 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253420 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253430 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253441 | orchestrator | 2026-03-17 00:37:19.253452 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-17 00:37:19.253462 | orchestrator | Tuesday 17 March 2026 00:37:10 +0000 (0:00:01.324) 0:09:21.200 ********* 2026-03-17 00:37:19.253473 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253484 | orchestrator | 
changed: [testbed-node-0] 2026-03-17 00:37:19.253494 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253505 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253516 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253526 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253537 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253548 | orchestrator | 2026-03-17 00:37:19.253585 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-17 00:37:19.253605 | orchestrator | Tuesday 17 March 2026 00:37:12 +0000 (0:00:01.912) 0:09:23.112 ********* 2026-03-17 00:37:19.253624 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253644 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.253662 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253681 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253693 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253703 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253714 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253725 | orchestrator | 2026-03-17 00:37:19.253735 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-17 00:37:19.253746 | orchestrator | Tuesday 17 March 2026 00:37:13 +0000 (0:00:01.359) 0:09:24.472 ********* 2026-03-17 00:37:19.253757 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.253768 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.253778 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.253789 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.253800 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.253810 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.253821 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.253831 | orchestrator | 2026-03-17 
00:37:19.253843 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-17 00:37:19.253853 | orchestrator | 2026-03-17 00:37:19.253864 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-17 00:37:19.253875 | orchestrator | Tuesday 17 March 2026 00:37:14 +0000 (0:00:01.102) 0:09:25.574 ********* 2026-03-17 00:37:19.253896 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:37:19.253907 | orchestrator | 2026-03-17 00:37:19.253917 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-17 00:37:19.253928 | orchestrator | Tuesday 17 March 2026 00:37:15 +0000 (0:00:00.967) 0:09:26.542 ********* 2026-03-17 00:37:19.253939 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:19.253949 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:19.253966 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:19.253978 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:19.253989 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:19.253999 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:19.254010 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:19.254080 | orchestrator | 2026-03-17 00:37:19.254092 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-17 00:37:19.254103 | orchestrator | Tuesday 17 March 2026 00:37:16 +0000 (0:00:00.803) 0:09:27.345 ********* 2026-03-17 00:37:19.254114 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:19.254125 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:19.254135 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:19.254146 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:19.254157 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:19.254168 | 
orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:19.254178 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:19.254189 | orchestrator | 2026-03-17 00:37:19.254200 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-17 00:37:19.254210 | orchestrator | Tuesday 17 March 2026 00:37:17 +0000 (0:00:01.242) 0:09:28.588 ********* 2026-03-17 00:37:19.254221 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:37:19.254232 | orchestrator | 2026-03-17 00:37:19.254243 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-17 00:37:19.254254 | orchestrator | Tuesday 17 March 2026 00:37:18 +0000 (0:00:00.802) 0:09:29.390 ********* 2026-03-17 00:37:19.254265 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:19.254276 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:19.254286 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:19.254297 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:19.254308 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:19.254318 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:19.254329 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:19.254340 | orchestrator | 2026-03-17 00:37:19.254360 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-17 00:37:20.799634 | orchestrator | Tuesday 17 March 2026 00:37:19 +0000 (0:00:00.917) 0:09:30.308 ********* 2026-03-17 00:37:20.799710 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:20.799719 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:20.799726 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:20.799732 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:20.799738 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:20.799744 | 
orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:20.799749 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:20.799754 | orchestrator | 2026-03-17 00:37:20.799761 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:37:20.799767 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-17 00:37:20.799775 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-17 00:37:20.799781 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-17 00:37:20.799809 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-17 00:37:20.799815 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-17 00:37:20.799820 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-17 00:37:20.799826 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-17 00:37:20.799831 | orchestrator | 2026-03-17 00:37:20.799836 | orchestrator | 2026-03-17 00:37:20.799842 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:37:20.799848 | orchestrator | Tuesday 17 March 2026 00:37:20 +0000 (0:00:01.274) 0:09:31.582 ********* 2026-03-17 00:37:20.799853 | orchestrator | =============================================================================== 2026-03-17 00:37:20.799859 | orchestrator | osism.commons.packages : Download required packages ------------------- 108.51s 2026-03-17 00:37:20.799864 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.83s 2026-03-17 00:37:20.799870 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 34.43s 2026-03-17 00:37:20.799875 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.98s 2026-03-17 00:37:20.799881 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.72s 2026-03-17 00:37:20.799887 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.73s 2026-03-17 00:37:20.799893 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.49s 2026-03-17 00:37:20.799900 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.47s 2026-03-17 00:37:20.799906 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.32s 2026-03-17 00:37:20.799913 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.08s 2026-03-17 00:37:20.799919 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.69s 2026-03-17 00:37:20.799936 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.59s 2026-03-17 00:37:20.799943 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.39s 2026-03-17 00:37:20.799950 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.06s 2026-03-17 00:37:20.799956 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.63s 2026-03-17 00:37:20.799962 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.36s 2026-03-17 00:37:20.799968 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.69s 2026-03-17 00:37:20.799974 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.51s 2026-03-17 00:37:20.799981 | orchestrator | 
osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.72s 2026-03-17 00:37:20.799987 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.52s 2026-03-17 00:37:20.975665 | orchestrator | + osism apply fail2ban 2026-03-17 00:37:32.550645 | orchestrator | 2026-03-17 00:37:32 | INFO  | Prepare task for execution of fail2ban. 2026-03-17 00:37:32.626355 | orchestrator | 2026-03-17 00:37:32 | INFO  | Task 53fad2ec-3615-48bf-9e18-e6859599b9a9 (fail2ban) was prepared for execution. 2026-03-17 00:37:32.626482 | orchestrator | 2026-03-17 00:37:32 | INFO  | It takes a moment until task 53fad2ec-3615-48bf-9e18-e6859599b9a9 (fail2ban) has been started and output is visible here. 2026-03-17 00:37:54.325906 | orchestrator | 2026-03-17 00:37:54.326073 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-17 00:37:54.326121 | orchestrator | 2026-03-17 00:37:54.326134 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-17 00:37:54.326145 | orchestrator | Tuesday 17 March 2026 00:37:36 +0000 (0:00:00.343) 0:00:00.343 ********* 2026-03-17 00:37:54.326158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:37:54.326172 | orchestrator | 2026-03-17 00:37:54.326183 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-17 00:37:54.326194 | orchestrator | Tuesday 17 March 2026 00:37:37 +0000 (0:00:01.115) 0:00:01.458 ********* 2026-03-17 00:37:54.326205 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:54.326217 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:54.326228 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:54.326239 | 
orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:54.326249 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:54.326260 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:54.326271 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:54.326281 | orchestrator | 2026-03-17 00:37:54.326292 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-17 00:37:54.326303 | orchestrator | Tuesday 17 March 2026 00:37:49 +0000 (0:00:12.780) 0:00:14.238 ********* 2026-03-17 00:37:54.326314 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:54.326325 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:54.326335 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:54.326346 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:54.326357 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:54.326367 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:54.326378 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:54.326389 | orchestrator | 2026-03-17 00:37:54.326400 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-17 00:37:54.326411 | orchestrator | Tuesday 17 March 2026 00:37:51 +0000 (0:00:01.518) 0:00:15.757 ********* 2026-03-17 00:37:54.326421 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:54.326433 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:54.326444 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:54.326454 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:54.326465 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:54.326476 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:54.326486 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:54.326522 | orchestrator | 2026-03-17 00:37:54.326533 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-17 00:37:54.326544 | orchestrator | Tuesday 17 March 
2026 00:37:52 +0000 (0:00:01.212) 0:00:16.969 ********* 2026-03-17 00:37:54.326555 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:54.326566 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:54.326577 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:54.326588 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:54.326599 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:54.326609 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:54.326620 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:54.326631 | orchestrator | 2026-03-17 00:37:54.326642 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:37:54.326653 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326664 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326675 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326686 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326721 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326732 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326743 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:54.326754 | orchestrator | 2026-03-17 00:37:54.326765 | orchestrator | 2026-03-17 00:37:54.326775 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:37:54.326786 | orchestrator | Tuesday 17 March 2026 00:37:54 +0000 (0:00:01.491) 0:00:18.460 ********* 2026-03-17 00:37:54.326797 | 
orchestrator | =============================================================================== 2026-03-17 00:37:54.326808 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.78s 2026-03-17 00:37:54.326819 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s 2026-03-17 00:37:54.326830 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.49s 2026-03-17 00:37:54.326840 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.21s 2026-03-17 00:37:54.326851 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.12s 2026-03-17 00:37:54.436993 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-17 00:37:54.437086 | orchestrator | + osism apply network 2026-03-17 00:38:05.589992 | orchestrator | 2026-03-17 00:38:05 | INFO  | Prepare task for execution of network. 2026-03-17 00:38:05.663826 | orchestrator | 2026-03-17 00:38:05 | INFO  | Task 7f4cf562-dec5-4967-9a70-fa9c4180ac1e (network) was prepared for execution. 2026-03-17 00:38:05.663926 | orchestrator | 2026-03-17 00:38:05 | INFO  | It takes a moment until task 7f4cf562-dec5-4967-9a70-fa9c4180ac1e (network) has been started and output is visible here. 
2026-03-17 00:38:32.292972 | orchestrator |
2026-03-17 00:38:32.293083 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-17 00:38:32.293100 | orchestrator |
2026-03-17 00:38:32.293112 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-17 00:38:32.293124 | orchestrator | Tuesday 17 March 2026 00:38:08 +0000 (0:00:00.324) 0:00:00.324 *********
2026-03-17 00:38:32.293135 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.293147 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.293158 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.293169 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.293179 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.293190 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.293200 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.293211 | orchestrator |
2026-03-17 00:38:32.293222 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-17 00:38:32.293233 | orchestrator | Tuesday 17 March 2026 00:38:09 +0000 (0:00:00.503) 0:00:00.827 *********
2026-03-17 00:38:32.293246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:38:32.293259 | orchestrator |
2026-03-17 00:38:32.293270 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-17 00:38:32.293281 | orchestrator | Tuesday 17 March 2026 00:38:10 +0000 (0:00:01.030) 0:00:01.858 *********
2026-03-17 00:38:32.293292 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.293303 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.293313 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.293324 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.293335 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.293378 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.293406 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.293429 | orchestrator |
2026-03-17 00:38:32.293512 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-17 00:38:32.293530 | orchestrator | Tuesday 17 March 2026 00:38:12 +0000 (0:00:02.342) 0:00:04.201 *********
2026-03-17 00:38:32.293549 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.293566 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.293586 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.293605 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.293625 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.293644 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.293663 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.293677 | orchestrator |
2026-03-17 00:38:32.293689 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-17 00:38:32.293701 | orchestrator | Tuesday 17 March 2026 00:38:14 +0000 (0:00:01.607) 0:00:05.808 *********
2026-03-17 00:38:32.293714 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-17 00:38:32.293727 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-17 00:38:32.293739 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-17 00:38:32.293752 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-17 00:38:32.293764 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-17 00:38:32.293776 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-17 00:38:32.293788 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-17 00:38:32.293800 | orchestrator |
2026-03-17 00:38:32.293812 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-17 00:38:32.293824 | orchestrator | Tuesday 17 March 2026 00:38:15 +0000 (0:00:01.054) 0:00:06.863 *********
2026-03-17 00:38:32.293837 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:38:32.293850 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:38:32.293862 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-17 00:38:32.293873 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-17 00:38:32.293883 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-17 00:38:32.293894 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-17 00:38:32.293905 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-17 00:38:32.293916 | orchestrator |
2026-03-17 00:38:32.293927 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-17 00:38:32.293937 | orchestrator | Tuesday 17 March 2026 00:38:18 +0000 (0:00:03.131) 0:00:09.994 *********
2026-03-17 00:38:32.293948 | orchestrator | changed: [testbed-manager]
2026-03-17 00:38:32.293959 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:38:32.293969 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:38:32.293980 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:38:32.293990 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:38:32.294001 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:38:32.294012 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:38:32.294088 | orchestrator |
2026-03-17 00:38:32.294127 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-17 00:38:32.294139 | orchestrator | Tuesday 17 March 2026 00:38:20 +0000 (0:00:01.670) 0:00:11.664 *********
2026-03-17 00:38:32.294150 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:38:32.294161 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:38:32.294171 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-17 00:38:32.294182 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-17 00:38:32.294192 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-17 00:38:32.294203 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-17 00:38:32.294214 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-17 00:38:32.294224 | orchestrator |
2026-03-17 00:38:32.294235 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-17 00:38:32.294246 | orchestrator | Tuesday 17 March 2026 00:38:22 +0000 (0:00:01.870) 0:00:13.535 *********
2026-03-17 00:38:32.294268 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.294279 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.294290 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.294309 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.294335 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.294356 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.294373 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.294390 | orchestrator |
2026-03-17 00:38:32.294408 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-17 00:38:32.294478 | orchestrator | Tuesday 17 March 2026 00:38:23 +0000 (0:00:00.888) 0:00:14.424 *********
2026-03-17 00:38:32.294501 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:32.294522 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:32.294540 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:32.294560 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:32.294578 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:32.294598 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:32.294617 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:32.294637 | orchestrator |
2026-03-17 00:38:32.294658 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-17 00:38:32.294678 | orchestrator | Tuesday 17 March 2026 00:38:23 +0000 (0:00:00.684) 0:00:15.109 *********
2026-03-17 00:38:32.294695 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.294706 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.294717 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.294727 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.294738 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.294748 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.294759 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.294769 | orchestrator |
2026-03-17 00:38:32.294780 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-17 00:38:32.294791 | orchestrator | Tuesday 17 March 2026 00:38:26 +0000 (0:00:02.328) 0:00:17.437 *********
2026-03-17 00:38:32.294801 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:32.294812 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:32.294822 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:32.294833 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:32.294844 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:32.294854 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:32.294866 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-03-17 00:38:32.294878 | orchestrator |
2026-03-17 00:38:32.294889 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-17 00:38:32.294899 | orchestrator | Tuesday 17 March 2026 00:38:26 +0000 (0:00:00.789) 0:00:18.226 *********
2026-03-17 00:38:32.294910 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.294920 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:38:32.294931 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:38:32.294942 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:38:32.294952 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:38:32.294963 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:38:32.294973 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:38:32.294984 | orchestrator |
2026-03-17 00:38:32.294995 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-17 00:38:32.295005 | orchestrator | Tuesday 17 March 2026 00:38:28 +0000 (0:00:01.392) 0:00:19.619 *********
2026-03-17 00:38:32.295017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:38:32.295029 | orchestrator |
2026-03-17 00:38:32.295040 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-17 00:38:32.295051 | orchestrator | Tuesday 17 March 2026 00:38:29 +0000 (0:00:01.116) 0:00:20.735 *********
2026-03-17 00:38:32.295073 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.295084 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.295094 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.295105 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.295116 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.295126 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.295137 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.295147 | orchestrator |
2026-03-17 00:38:32.295158 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-17 00:38:32.295169 | orchestrator | Tuesday 17 March 2026 00:38:30 +0000 (0:00:00.830) 0:00:21.870 *********
2026-03-17 00:38:32.295179 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:32.295190 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:32.295208 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:32.295219 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:32.295229 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:32.295240 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:32.295250 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:32.295261 | orchestrator |
2026-03-17 00:38:32.295272 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-17 00:38:32.295282 | orchestrator | Tuesday 17 March 2026 00:38:31 +0000 (0:00:00.830) 0:00:22.701 *********
2026-03-17 00:38:32.295293 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295304 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295314 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295325 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295335 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295346 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295356 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295367 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295377 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295388 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-17 00:38:32.295398 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295409 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295420 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295430 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-17 00:38:32.295575 | orchestrator |
2026-03-17 00:38:32.295629 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-17 00:38:47.364771 | orchestrator | Tuesday 17 March 2026 00:38:32 +0000 (0:00:01.006) 0:00:23.708 *********
2026-03-17 00:38:47.364888 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:47.364906 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:47.364919 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:47.364930 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:47.364941 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:47.364952 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:47.364964 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:47.364975 | orchestrator |
2026-03-17 00:38:47.364987 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-17 00:38:47.364999 | orchestrator | Tuesday 17 March 2026 00:38:32 +0000 (0:00:00.709) 0:00:24.417 *********
2026-03-17 00:38:47.365012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-5, testbed-node-2, testbed-node-3
2026-03-17 00:38:47.365050 | orchestrator |
2026-03-17 00:38:47.365061 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-17 00:38:47.365093 | orchestrator | Tuesday 17 March 2026 00:38:37 +0000 (0:00:04.128) 0:00:28.546 *********
2026-03-17 00:38:47.365106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12',
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365118 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-17 00:38:47.365131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-17 00:38:47.365203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365215 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-17 00:38:47.365269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-17 00:38:47.365282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-17 00:38:47.365313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-17 00:38:47.365327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-17 00:38:47.365349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-17 00:38:47.365362 | orchestrator |
2026-03-17 00:38:47.365375 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-17 00:38:47.365387 | orchestrator | Tuesday 17 March 2026 00:38:42 +0000 (0:00:05.017) 0:00:33.563 *********
2026-03-17 00:38:47.365401 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-17 00:38:47.365414 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-17 00:38:47.365453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-17 00:38:47.365536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-17 00:38:47.365548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-17 00:38:47.365561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-17 00:38:47.365574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-17 00:38:47.365602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-17 00:38:58.953620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-17 00:38:58.953712 | orchestrator |
2026-03-17 00:38:58.953722 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-17 00:38:58.953730 | orchestrator | Tuesday 17 March 2026 00:38:47 +0000 (0:00:05.487) 0:00:39.051 *********
2026-03-17 00:38:58.953739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:38:58.953746 | orchestrator |
2026-03-17 00:38:58.953753 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-17 00:38:58.953759 | orchestrator | Tuesday 17 March 2026 00:38:48 +0000 (0:00:01.055) 0:00:40.107 *********
2026-03-17 00:38:58.953766 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:58.953773 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:58.953780 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:58.953786 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:58.953792 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:58.953798 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:58.953805 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:58.953811 | orchestrator |
2026-03-17 00:38:58.953817 | orchestrator | TASK [osism.commons.network : Remove
unused configuration files] ***************
2026-03-17 00:38:58.953824 | orchestrator | Tuesday 17 March 2026 00:38:49 +0000 (0:00:01.013) 0:00:41.120 *********
2026-03-17 00:38:58.953830 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.953838 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.953844 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.953850 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.953856 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.953862 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.953869 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.953875 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.953881 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.953888 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.953894 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.953901 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.953907 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.953913 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.953919 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.953938 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.953945 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.953951 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.953975 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.953982 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.953988 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.953995 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.954001 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.954007 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954060 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.954067 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.954074 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.954080 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954086 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.954114 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954121 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:58.954127 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:58.954134 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:58.954140 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:58.954146 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954152 | orchestrator |
2026-03-17 00:38:58.954159 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-17 00:38:58.954179 | orchestrator | Tuesday 17 March 2026 00:38:50 +0000 (0:00:00.650) 0:00:41.770 *********
2026-03-17 00:38:58.954187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:38:58.954194 | orchestrator |
2026-03-17 00:38:58.954201 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-17 00:38:58.954209 | orchestrator | Tuesday 17 March 2026 00:38:51 +0000 (0:00:01.083) 0:00:42.854 *********
2026-03-17 00:38:58.954216 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.954223 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.954230 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.954237 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954244 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954251 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954258 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954265 | orchestrator |
2026-03-17 00:38:58.954273 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-17 00:38:58.954280 | orchestrator | Tuesday 17 March 2026 00:38:52 +0000 (0:00:00.631) 0:00:43.485 *********
2026-03-17 00:38:58.954288 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.954295 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.954302 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.954309 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954316 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954323 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954330 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954337 | orchestrator |
2026-03-17 00:38:58.954344 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-17 00:38:58.954352 | orchestrator | Tuesday 17 March 2026 00:38:52 +0000 (0:00:00.573) 0:00:44.058 *********
2026-03-17 00:38:58.954359 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.954372 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.954378 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.954384 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954390 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954397 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954403 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954424 | orchestrator |
2026-03-17 00:38:58.954431 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-17 00:38:58.954437 | orchestrator | Tuesday 17 March 2026 00:38:53 +0000 (0:00:00.624) 0:00:44.683 *********
2026-03-17 00:38:58.954443 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:58.954449 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:58.954456 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:58.954462 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:58.954468 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:58.954474 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:58.954480 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:58.954487 | orchestrator |
2026-03-17 00:38:58.954493 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-17 00:38:58.954499 | orchestrator | Tuesday 17 March 2026 00:38:54 +0000 (0:00:01.033) 0:00:46.101 *********
2026-03-17 00:38:58.954505 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:58.954511 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:58.954517 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:58.954523 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:58.954529 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:58.954535 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:58.954542 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:58.954548 | orchestrator |
2026-03-17 00:38:58.954554 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-17 00:38:58.954565 | orchestrator | Tuesday 17 March 2026 00:38:55 +0000 (0:00:01.033) 0:00:47.135 *********
2026-03-17 00:38:58.954572 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:58.954578 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:58.954584 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:58.954590 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:58.954596 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:58.954602 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:58.954608 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:58.954614 | orchestrator |
2026-03-17 00:38:58.954620 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-17 00:38:58.954627 | orchestrator | Tuesday 17 March 2026 00:38:57 +0000 (0:00:01.966) 0:00:49.101 *********
2026-03-17 00:38:58.954633 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.954639 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.954645 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.954651 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954658 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954664 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954670 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954676 | orchestrator |
2026-03-17 00:38:58.954682 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-17 00:38:58.954688 | orchestrator | Tuesday 17 March 2026 00:38:58 +0000 (0:00:00.589) 0:00:49.691 *********
2026-03-17 00:38:58.954694 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:58.954701 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:58.954707 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:58.954713 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:58.954719 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:58.954725 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:58.954731 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:58.954737 | orchestrator |
2026-03-17 00:38:58.954743 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:38:58.954750 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 00:38:58.954763 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:58.954774 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:59.163378 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:59.163553 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:59.163570 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:59.163582 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:38:59.163594 | orchestrator |
2026-03-17 00:38:59.163605 | orchestrator |
2026-03-17 00:38:59.163617 |
orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:38:59.163629 | orchestrator | Tuesday 17 March 2026 00:38:58 +0000 (0:00:00.679) 0:00:50.370 ********* 2026-03-17 00:38:59.163641 | orchestrator | =============================================================================== 2026-03-17 00:38:59.163651 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.49s 2026-03-17 00:38:59.163662 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.02s 2026-03-17 00:38:59.163673 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.13s 2026-03-17 00:38:59.163684 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.13s 2026-03-17 00:38:59.163695 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.34s 2026-03-17 00:38:59.163706 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.33s 2026-03-17 00:38:59.163717 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.97s 2026-03-17 00:38:59.163727 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s 2026-03-17 00:38:59.163738 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s 2026-03-17 00:38:59.163749 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.61s 2026-03-17 00:38:59.163760 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.42s 2026-03-17 00:38:59.163771 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.39s 2026-03-17 00:38:59.163781 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s 2026-03-17 00:38:59.163804 | orchestrator | 
osism.commons.network : Include cleanup tasks --------------------------- 1.12s 2026-03-17 00:38:59.163815 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.08s 2026-03-17 00:38:59.163826 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.06s 2026-03-17 00:38:59.163837 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s 2026-03-17 00:38:59.163848 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.03s 2026-03-17 00:38:59.163858 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.03s 2026-03-17 00:38:59.163870 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2026-03-17 00:38:59.293789 | orchestrator | + osism apply wireguard 2026-03-17 00:39:10.621340 | orchestrator | 2026-03-17 00:39:10 | INFO  | Prepare task for execution of wireguard. 2026-03-17 00:39:10.695058 | orchestrator | 2026-03-17 00:39:10 | INFO  | Task 0123c8c0-1bb3-41c2-a560-2a8607f95f1d (wireguard) was prepared for execution. 2026-03-17 00:39:10.695216 | orchestrator | 2026-03-17 00:39:10 | INFO  | It takes a moment until task 0123c8c0-1bb3-41c2-a560-2a8607f95f1d (wireguard) has been started and output is visible here. 
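The network-extra-init tasks above follow a common "script plus oneshot systemd unit" pattern: when extra init is configured, a script and a unit are deployed and enabled; otherwise the deploy tasks skip and the cleanup tasks disable and remove any leftovers, exactly as this run shows. A minimal sketch of what such a task pair can look like; the file paths and the `network_extra_init` variable are illustrative assumptions, not the actual internals of osism.commons.network:

```yaml
# Hedged sketch: deploy an init script plus a oneshot unit that runs it at
# boot. Names and variables are assumptions for illustration only.
- name: Deploy network-extra-init script
  ansible.builtin.template:
    src: network-extra-init.sh.j2
    dest: /usr/local/bin/network-extra-init.sh
    mode: "0755"
  when: network_extra_init | default(false)

- name: Deploy network-extra-init systemd service
  ansible.builtin.copy:
    dest: /etc/systemd/system/network-extra-init.service
    content: |
      [Unit]
      Description=Extra network initialisation
      After=network.target

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/network-extra-init.sh
      RemainAfterExit=true

      [Install]
      WantedBy=multi-user.target
  when: network_extra_init | default(false)

- name: Enable and start network-extra-init service
  ansible.builtin.systemd:
    name: network-extra-init.service
    enabled: true
    state: started
    daemon_reload: true
  when: network_extra_init | default(false)
```

With `network_extra_init` unset, all three tasks skip and mirror-image cleanup tasks (disable/stop, remove unit, remove script) report `ok`, which matches the skip/ok pattern in the log above.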
2026-03-17 00:39:28.445289 | orchestrator |
2026-03-17 00:39:28.445515 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-17 00:39:28.445538 | orchestrator |
2026-03-17 00:39:28.445551 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-17 00:39:28.445563 | orchestrator | Tuesday 17 March 2026 00:39:13 +0000 (0:00:00.256) 0:00:00.256 *********
2026-03-17 00:39:28.445575 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:28.445587 | orchestrator |
2026-03-17 00:39:28.445598 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-17 00:39:28.445609 | orchestrator | Tuesday 17 March 2026 00:39:15 +0000 (0:00:01.512) 0:00:01.768 *********
2026-03-17 00:39:28.445620 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.445631 | orchestrator |
2026-03-17 00:39:28.445642 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-17 00:39:28.445653 | orchestrator | Tuesday 17 March 2026 00:39:20 +0000 (0:00:05.635) 0:00:07.404 *********
2026-03-17 00:39:28.445664 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.445675 | orchestrator |
2026-03-17 00:39:28.445685 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-17 00:39:28.445696 | orchestrator | Tuesday 17 March 2026 00:39:21 +0000 (0:00:00.547) 0:00:07.951 *********
2026-03-17 00:39:28.445707 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.445718 | orchestrator |
2026-03-17 00:39:28.445728 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-17 00:39:28.445739 | orchestrator | Tuesday 17 March 2026 00:39:21 +0000 (0:00:00.414) 0:00:08.366 *********
2026-03-17 00:39:28.445750 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:28.445761 | orchestrator |
2026-03-17 00:39:28.445771 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-17 00:39:28.445782 | orchestrator | Tuesday 17 March 2026 00:39:22 +0000 (0:00:00.546) 0:00:08.912 *********
2026-03-17 00:39:28.445793 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:28.445803 | orchestrator |
2026-03-17 00:39:28.445814 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-17 00:39:28.445826 | orchestrator | Tuesday 17 March 2026 00:39:22 +0000 (0:00:00.435) 0:00:09.348 *********
2026-03-17 00:39:28.445838 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:28.445851 | orchestrator |
2026-03-17 00:39:28.445863 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-17 00:39:28.445876 | orchestrator | Tuesday 17 March 2026 00:39:23 +0000 (0:00:00.436) 0:00:09.784 *********
2026-03-17 00:39:28.445888 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.445901 | orchestrator |
2026-03-17 00:39:28.445913 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-17 00:39:28.445926 | orchestrator | Tuesday 17 March 2026 00:39:24 +0000 (0:00:01.184) 0:00:10.969 *********
2026-03-17 00:39:28.445939 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-17 00:39:28.445951 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.445979 | orchestrator |
2026-03-17 00:39:28.446000 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-17 00:39:28.446107 | orchestrator | Tuesday 17 March 2026 00:39:25 +0000 (0:00:00.897) 0:00:11.867 *********
2026-03-17 00:39:28.446146 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.446158 | orchestrator |
2026-03-17 00:39:28.446168 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-17 00:39:28.446179 | orchestrator | Tuesday 17 March 2026 00:39:27 +0000 (0:00:01.929) 0:00:13.796 *********
2026-03-17 00:39:28.446190 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:28.446201 | orchestrator |
2026-03-17 00:39:28.446211 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:39:28.446249 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:39:28.446262 | orchestrator |
2026-03-17 00:39:28.446273 | orchestrator |
2026-03-17 00:39:28.446283 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:39:28.446294 | orchestrator | Tuesday 17 March 2026 00:39:28 +0000 (0:00:00.909) 0:00:14.706 *********
2026-03-17 00:39:28.446305 | orchestrator | ===============================================================================
2026-03-17 00:39:28.446315 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.64s
2026-03-17 00:39:28.446326 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.93s
2026-03-17 00:39:28.446337 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.51s
2026-03-17 00:39:28.446347 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s
2026-03-17 00:39:28.446358 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s
2026-03-17 00:39:28.446391 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2026-03-17 00:39:28.446404 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-03-17 00:39:28.446415 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s
2026-03-17 00:39:28.446426 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2026-03-17 00:39:28.446442 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-03-17 00:39:28.446453 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2026-03-17 00:39:28.629703 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-17 00:39:28.663622 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2026-03-17 00:39:28.663713 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2026-03-17 00:39:28.736307 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 191 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 191
2026-03-17 00:39:28.752066 | orchestrator | + osism apply --environment custom workarounds
2026-03-17 00:39:29.944695 | orchestrator | 2026-03-17 00:39:29 | INFO  | Trying to run play workarounds in environment custom
2026-03-17 00:39:39.979973 | orchestrator | 2026-03-17 00:39:39 | INFO  | Prepare task for execution of workarounds.
2026-03-17 00:39:40.064394 | orchestrator | 2026-03-17 00:39:40 | INFO  | Task 3644a1e0-c88b-43cf-bab0-0b618159f9f4 (workarounds) was prepared for execution.
2026-03-17 00:39:40.064465 | orchestrator | 2026-03-17 00:39:40 | INFO  | It takes a moment until task 3644a1e0-c88b-43cf-bab0-0b618159f9f4 (workarounds) has been started and output is visible here.
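The wireguard play above installs the package, generates server and preshared keys on the manager, templates `wg0.conf`, and enables `wg-quick@wg0.service`, with a handler restarting the interface on configuration change. A minimal sketch of those key-handling and service steps; the paths, key file names, and handler wiring are assumptions for illustration and not necessarily the actual internals of osism.services.wireguard:

```yaml
# Hedged sketch of the steps visible in the log; paths and names are
# illustrative assumptions only.
- name: Create public and private key - server
  ansible.builtin.shell: |
    umask 077
    wg genkey | tee /etc/wireguard/server.private | wg pubkey > /etc/wireguard/server.public
  args:
    creates: /etc/wireguard/server.private

- name: Create preshared key
  ansible.builtin.shell: |
    umask 077
    wg genpsk > /etc/wireguard/preshared.key
  args:
    creates: /etc/wireguard/preshared.key

- name: Copy wg0.conf configuration file
  ansible.builtin.template:
    src: wg0.conf.j2
    dest: /etc/wireguard/wg0.conf
    mode: "0600"
  notify: Restart wg0 service

- name: Manage wg-quick@wg0.service service
  ansible.builtin.systemd:
    name: wg-quick@wg0.service
    enabled: true
    state: started
```

Using `creates:` keeps the key generation idempotent, which is why a re-run would report `ok` instead of `changed` for the key tasks.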
2026-03-17 00:40:03.928061 | orchestrator |
2026-03-17 00:40:03.928169 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:40:03.928186 | orchestrator |
2026-03-17 00:40:03.928198 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-17 00:40:03.928210 | orchestrator | Tuesday 17 March 2026 00:39:43 +0000 (0:00:00.193) 0:00:00.193 *********
2026-03-17 00:40:03.928221 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928232 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928243 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928254 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928265 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928276 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928307 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-17 00:40:03.928318 | orchestrator |
2026-03-17 00:40:03.928376 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-17 00:40:03.928395 | orchestrator |
2026-03-17 00:40:03.928415 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:40:03.928434 | orchestrator | Tuesday 17 March 2026 00:39:43 +0000 (0:00:00.701) 0:00:00.895 *********
2026-03-17 00:40:03.928455 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:03.928468 | orchestrator |
2026-03-17 00:40:03.928479 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-17 00:40:03.928489 | orchestrator |
2026-03-17 00:40:03.928500 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:40:03.928511 | orchestrator | Tuesday 17 March 2026 00:39:46 +0000 (0:00:02.597) 0:00:03.492 *********
2026-03-17 00:40:03.928521 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:03.928532 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:03.928543 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:03.928553 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:03.928564 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:03.928574 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:03.928585 | orchestrator |
2026-03-17 00:40:03.928598 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-17 00:40:03.928610 | orchestrator |
2026-03-17 00:40:03.928623 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-17 00:40:03.928635 | orchestrator | Tuesday 17 March 2026 00:39:48 +0000 (0:00:02.404) 0:00:05.896 *********
2026-03-17 00:40:03.928649 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928663 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928675 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928688 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928701 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928714 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:03.928725 | orchestrator |
2026-03-17 00:40:03.928736 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-17 00:40:03.928747 | orchestrator | Tuesday 17 March 2026 00:39:50 +0000 (0:00:01.418) 0:00:07.315 *********
2026-03-17 00:40:03.928759 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:03.928770 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:03.928780 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:03.928803 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:03.928814 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:03.928825 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:03.928835 | orchestrator |
2026-03-17 00:40:03.928846 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-17 00:40:03.928857 | orchestrator | Tuesday 17 March 2026 00:39:53 +0000 (0:00:03.094) 0:00:10.410 *********
2026-03-17 00:40:03.928875 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:03.928887 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:03.928897 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:03.928908 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:03.928918 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:03.928929 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:03.928939 | orchestrator |
2026-03-17 00:40:03.928950 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-17 00:40:03.928960 | orchestrator |
2026-03-17 00:40:03.928971 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-17 00:40:03.928991 | orchestrator | Tuesday 17 March 2026 00:39:53 +0000 (0:00:00.482) 0:00:10.892 *********
2026-03-17 00:40:03.929002 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:03.929012 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:03.929023 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:03.929033 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:03.929044 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:03.929055 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:03.929065 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:03.929076 | orchestrator |
2026-03-17 00:40:03.929087 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-17 00:40:03.929097 | orchestrator | Tuesday 17 March 2026 00:39:55 +0000 (0:00:01.638) 0:00:12.530 *********
2026-03-17 00:40:03.929108 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:03.929119 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:03.929129 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:03.929140 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:03.929151 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:03.929161 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:03.929190 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:03.929202 | orchestrator |
2026-03-17 00:40:03.929213 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-17 00:40:03.929223 | orchestrator | Tuesday 17 March 2026 00:39:56 +0000 (0:00:01.428) 0:00:13.958 *********
2026-03-17 00:40:03.929234 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:03.929245 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:03.929255 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:03.929266 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:03.929276 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:03.929287 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:03.929297 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:03.929308 | orchestrator |
2026-03-17 00:40:03.929372 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-17 00:40:03.929386 | orchestrator | Tuesday 17 March 2026 00:39:58 +0000 (0:00:01.603) 0:00:15.561 *********
2026-03-17 00:40:03.929397 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:03.929408 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:03.929419 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:03.929430 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:03.929440 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:03.929451 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:03.929461 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:03.929472 | orchestrator |
2026-03-17 00:40:03.929482 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-17 00:40:03.929493 | orchestrator | Tuesday 17 March 2026 00:40:00 +0000 (0:00:01.755) 0:00:17.317 *********
2026-03-17 00:40:03.929504 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:03.929514 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:03.929525 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:03.929535 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:03.929546 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:03.929556 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:03.929567 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:03.929577 | orchestrator |
2026-03-17 00:40:03.929588 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-17 00:40:03.929600 | orchestrator |
2026-03-17 00:40:03.929620 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-17 00:40:03.929648 | orchestrator | Tuesday 17 March 2026 00:40:01 +0000 (0:00:00.738) 0:00:18.056 *********
2026-03-17 00:40:03.929667 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:03.929686 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:03.929704 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:03.929723 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:03.929739 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:03.929769 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:03.929788 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:03.929804 | orchestrator |
2026-03-17 00:40:03.929823 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:40:03.929841 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:40:03.929858 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929876 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929895 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929913 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929931 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929948 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:03.929967 | orchestrator |
2026-03-17 00:40:03.929985 | orchestrator |
2026-03-17 00:40:03.930012 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:40:03.930128 | orchestrator | Tuesday 17 March 2026 00:40:03 +0000 (0:00:02.828) 0:00:20.884 *********
2026-03-17 00:40:03.930149 | orchestrator | ===============================================================================
2026-03-17 00:40:03.930169 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.09s
2026-03-17 00:40:03.930188 | orchestrator | Install python3-docker -------------------------------------------------- 2.83s
2026-03-17 00:40:03.930207 | orchestrator | Apply netplan configuration --------------------------------------------- 2.60s
2026-03-17 00:40:03.930226 | orchestrator | Apply netplan configuration --------------------------------------------- 2.40s
2026-03-17 00:40:03.930246 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s
2026-03-17 00:40:03.930264 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2026-03-17 00:40:03.930280 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2026-03-17 00:40:03.930291 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.43s
2026-03-17 00:40:03.930301 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.42s
2026-03-17 00:40:03.930312 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.74s
2026-03-17 00:40:03.930390 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s
2026-03-17 00:40:03.930420 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.48s
2026-03-17 00:40:04.361174 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-17 00:40:15.676271 | orchestrator | 2026-03-17 00:40:15 | INFO  | Prepare task for execution of reboot.
2026-03-17 00:40:15.764380 | orchestrator | 2026-03-17 00:40:15 | INFO  | Task 75f8eb9d-2ee4-40f3-b614-440a5b0f704e (reboot) was prepared for execution.
2026-03-17 00:40:15.764481 | orchestrator | 2026-03-17 00:40:15 | INFO  | It takes a moment until task 75f8eb9d-2ee4-40f3-b614-440a5b0f704e (reboot) has been started and output is visible here.
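The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` invocation above runs a confirmation-guarded reboot: one play per host, a guard task that only fires if the `ireallymeanit` confirmation is missing, then a fire-and-forget reboot task (the "wait for the reboot to complete" variant is skipped in this run). A minimal sketch of that pattern; the play structure and the `reboot_wait` variable are illustrative assumptions and not necessarily the playbook's exact source:

```yaml
# Hedged sketch of the guarded reboot pattern seen in the log.
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1                     # one host at a time, as the per-host plays suggest
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to actually reboot the systems."
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && systemctl reboot
      async: 1
      poll: 0
      when: not (reboot_wait | default(false))

    - name: Reboot system - wait for the reboot to complete
      ansible.builtin.reboot:
        reboot_timeout: 600
      when: reboot_wait | default(false)
```

With `ireallymeanit=yes` the guard is skipped, the async reboot reports `changed`, and the waiting variant is skipped, matching the skipping/changed/skipping triplet per node in the recap below.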
2026-03-17 00:40:26.684844 | orchestrator |
2026-03-17 00:40:26.684943 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.684956 | orchestrator |
2026-03-17 00:40:26.684989 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.684998 | orchestrator | Tuesday 17 March 2026 00:40:18 +0000 (0:00:00.238) 0:00:00.238 *********
2026-03-17 00:40:26.685006 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:26.685016 | orchestrator |
2026-03-17 00:40:26.685023 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685030 | orchestrator | Tuesday 17 March 2026 00:40:19 +0000 (0:00:00.140) 0:00:00.379 *********
2026-03-17 00:40:26.685037 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:26.685045 | orchestrator |
2026-03-17 00:40:26.685052 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685059 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:01.246) 0:00:01.625 *********
2026-03-17 00:40:26.685066 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:26.685074 | orchestrator |
2026-03-17 00:40:26.685081 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.685089 | orchestrator |
2026-03-17 00:40:26.685178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.685188 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.111) 0:00:01.737 *********
2026-03-17 00:40:26.685195 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:26.685244 | orchestrator |
2026-03-17 00:40:26.685254 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685262 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.094) 0:00:01.832 *********
2026-03-17 00:40:26.685269 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:26.685275 | orchestrator |
2026-03-17 00:40:26.685282 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685289 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:01.025) 0:00:02.857 *********
2026-03-17 00:40:26.685318 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:26.685325 | orchestrator |
2026-03-17 00:40:26.685332 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.685339 | orchestrator |
2026-03-17 00:40:26.685347 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.685353 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.107) 0:00:02.965 *********
2026-03-17 00:40:26.685360 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:26.685366 | orchestrator |
2026-03-17 00:40:26.685373 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685380 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.085) 0:00:03.050 *********
2026-03-17 00:40:26.685386 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:26.685393 | orchestrator |
2026-03-17 00:40:26.685401 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685408 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:01.019) 0:00:04.070 *********
2026-03-17 00:40:26.685415 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:26.685423 | orchestrator |
2026-03-17 00:40:26.685431 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.685438 | orchestrator |
2026-03-17 00:40:26.685445 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.685453 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:00.097) 0:00:04.168 *********
2026-03-17 00:40:26.685460 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:26.685468 | orchestrator |
2026-03-17 00:40:26.685475 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685498 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:00.091) 0:00:04.259 *********
2026-03-17 00:40:26.685505 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:26.685513 | orchestrator |
2026-03-17 00:40:26.685520 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685527 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:01.075) 0:00:05.335 *********
2026-03-17 00:40:26.685544 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:26.685551 | orchestrator |
2026-03-17 00:40:26.685558 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.685566 | orchestrator |
2026-03-17 00:40:26.685573 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.685580 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:00.093) 0:00:05.428 *********
2026-03-17 00:40:26.685587 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:26.685594 | orchestrator |
2026-03-17 00:40:26.685600 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685608 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:00.086) 0:00:05.515 *********
2026-03-17 00:40:26.685614 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:26.685621 | orchestrator |
2026-03-17 00:40:26.685628 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685635 | orchestrator | Tuesday 17 March 2026 00:40:25 +0000 (0:00:01.039) 0:00:06.554 *********
2026-03-17 00:40:26.685641 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:26.685647 | orchestrator |
2026-03-17 00:40:26.685654 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:26.685660 | orchestrator |
2026-03-17 00:40:26.685667 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:26.685673 | orchestrator | Tuesday 17 March 2026 00:40:25 +0000 (0:00:00.106) 0:00:06.661 *********
2026-03-17 00:40:26.685679 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:26.685686 | orchestrator |
2026-03-17 00:40:26.685693 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:26.685699 | orchestrator | Tuesday 17 March 2026 00:40:25 +0000 (0:00:00.139) 0:00:06.801 *********
2026-03-17 00:40:26.685705 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:26.685712 | orchestrator |
2026-03-17 00:40:26.685718 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:26.685725 | orchestrator | Tuesday 17 March 2026 00:40:26 +0000 (0:00:01.028) 0:00:07.829 *********
2026-03-17 00:40:26.685751 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:26.685758 | orchestrator |
2026-03-17 00:40:26.685764 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:40:26.685804 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:26.686151 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:26.686160 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2
rescued=0 ignored=0 2026-03-17 00:40:26.686167 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:26.686173 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:26.686180 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:26.686186 | orchestrator | 2026-03-17 00:40:26.686192 | orchestrator | 2026-03-17 00:40:26.686198 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:40:26.686205 | orchestrator | Tuesday 17 March 2026 00:40:26 +0000 (0:00:00.020) 0:00:07.850 ********* 2026-03-17 00:40:26.686212 | orchestrator | =============================================================================== 2026-03-17 00:40:26.686219 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.44s 2026-03-17 00:40:26.686225 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2026-03-17 00:40:26.686247 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-03-17 00:40:26.812938 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-17 00:40:38.054572 | orchestrator | 2026-03-17 00:40:38 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-17 00:40:38.126935 | orchestrator | 2026-03-17 00:40:38 | INFO  | Task 4e94935b-ab39-4fca-8bb6-633097bc01e0 (wait-for-connection) was prepared for execution. 2026-03-17 00:40:38.126987 | orchestrator | 2026-03-17 00:40:38 | INFO  | It takes a moment until task 4e94935b-ab39-4fca-8bb6-633097bc01e0 (wait-for-connection) has been started and output is visible here. 
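The reboot play above deliberately does not block ("do not wait for the reboot to complete"); reconnection is handled by the separate `osism apply wait-for-connection` run that follows. The second phase is a generic poll-until-ready loop. A minimal standalone sketch of that pattern (the `wait_until` name, the one-second retry interval, and the SSH example are illustrative, not taken from the OSISM testbed scripts):

```shell
# Poll-until-ready helper in the spirit of "osism apply wait-for-connection":
# retry an arbitrary check command until it succeeds or attempts run out.
# wait_until is an illustrative name, not part of the OSISM tooling.
wait_until() {
    local max_attempts=$1
    shift
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            echo "check '$*' did not succeed after $max_attempts attempts" >&2
            return 1
        fi
        sleep 1
    done
}

# Example (hostname is a placeholder): wait until a node answers on SSH.
# wait_until 60 ssh -o ConnectTimeout=5 testbed-node-0 true
```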
2026-03-17 00:40:53.081326 | orchestrator | 2026-03-17 00:40:53.081457 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-17 00:40:53.081475 | orchestrator | 2026-03-17 00:40:53.081488 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-17 00:40:53.081551 | orchestrator | Tuesday 17 March 2026 00:40:41 +0000 (0:00:00.271) 0:00:00.271 ********* 2026-03-17 00:40:53.081565 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:40:53.081577 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:40:53.081589 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:40:53.081616 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:40:53.081628 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:40:53.081639 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:40:53.081650 | orchestrator | 2026-03-17 00:40:53.081661 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:40:53.081673 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081686 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081697 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081708 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081719 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081730 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:40:53.081741 | orchestrator | 2026-03-17 00:40:53.081754 | orchestrator | 2026-03-17 00:40:53.081768 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-17 00:40:53.081781 | orchestrator | Tuesday 17 March 2026 00:40:52 +0000 (0:00:11.588) 0:00:11.860 ********* 2026-03-17 00:40:53.081793 | orchestrator | =============================================================================== 2026-03-17 00:40:53.081806 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2026-03-17 00:40:53.255522 | orchestrator | + osism apply hddtemp 2026-03-17 00:41:04.606240 | orchestrator | 2026-03-17 00:41:04 | INFO  | Prepare task for execution of hddtemp. 2026-03-17 00:41:04.679290 | orchestrator | 2026-03-17 00:41:04 | INFO  | Task 2aadaf06-11da-4962-beaf-031597f79af1 (hddtemp) was prepared for execution. 2026-03-17 00:41:04.679399 | orchestrator | 2026-03-17 00:41:04 | INFO  | It takes a moment until task 2aadaf06-11da-4962-beaf-031597f79af1 (hddtemp) has been started and output is visible here. 2026-03-17 00:41:32.349886 | orchestrator | 2026-03-17 00:41:32.349993 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-17 00:41:32.350010 | orchestrator | 2026-03-17 00:41:32.350082 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-17 00:41:32.350094 | orchestrator | Tuesday 17 March 2026 00:41:07 +0000 (0:00:00.243) 0:00:00.243 ********* 2026-03-17 00:41:32.350132 | orchestrator | ok: [testbed-manager] 2026-03-17 00:41:32.350144 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:41:32.350155 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:41:32.350166 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:41:32.350177 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:41:32.350188 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:41:32.350199 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:41:32.350210 | orchestrator | 2026-03-17 00:41:32.350277 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-17 00:41:32.350289 | orchestrator | Tuesday 17 March 2026 00:41:08 +0000 (0:00:00.500) 0:00:00.744 ********* 2026-03-17 00:41:32.350302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:41:32.350316 | orchestrator | 2026-03-17 00:41:32.350328 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-17 00:41:32.350339 | orchestrator | Tuesday 17 March 2026 00:41:08 +0000 (0:00:00.840) 0:00:01.585 ********* 2026-03-17 00:41:32.350350 | orchestrator | ok: [testbed-manager] 2026-03-17 00:41:32.350361 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:41:32.350372 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:41:32.350383 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:41:32.350393 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:41:32.350404 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:41:32.350415 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:41:32.350429 | orchestrator | 2026-03-17 00:41:32.350447 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-17 00:41:32.350465 | orchestrator | Tuesday 17 March 2026 00:41:11 +0000 (0:00:02.479) 0:00:04.064 ********* 2026-03-17 00:41:32.350483 | orchestrator | changed: [testbed-manager] 2026-03-17 00:41:32.350504 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:41:32.350522 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:41:32.350536 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:41:32.350547 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:41:32.350559 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:41:32.350578 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:41:32.350596 | 
orchestrator | 2026-03-17 00:41:32.350612 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-17 00:41:32.350631 | orchestrator | Tuesday 17 March 2026 00:41:12 +0000 (0:00:00.926) 0:00:04.990 ********* 2026-03-17 00:41:32.350648 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:41:32.350666 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:41:32.350684 | orchestrator | ok: [testbed-manager] 2026-03-17 00:41:32.350704 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:41:32.350723 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:41:32.350741 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:41:32.350760 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:41:32.350778 | orchestrator | 2026-03-17 00:41:32.350797 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-17 00:41:32.350809 | orchestrator | Tuesday 17 March 2026 00:41:13 +0000 (0:00:01.316) 0:00:06.307 ********* 2026-03-17 00:41:32.350820 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:41:32.350831 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:41:32.350855 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:41:32.350867 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:41:32.350878 | orchestrator | changed: [testbed-manager] 2026-03-17 00:41:32.350888 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:41:32.350899 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:41:32.350910 | orchestrator | 2026-03-17 00:41:32.350921 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-17 00:41:32.350932 | orchestrator | Tuesday 17 March 2026 00:41:14 +0000 (0:00:00.534) 0:00:06.841 ********* 2026-03-17 00:41:32.350942 | orchestrator | changed: [testbed-manager] 2026-03-17 00:41:32.350966 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:41:32.350977 | orchestrator | changed: [testbed-node-1] 
2026-03-17 00:41:32.350989 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:41:32.350999 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:41:32.351010 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:41:32.351021 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:41:32.351031 | orchestrator | 2026-03-17 00:41:32.351042 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-17 00:41:32.351053 | orchestrator | Tuesday 17 March 2026 00:41:29 +0000 (0:00:14.857) 0:00:21.699 ********* 2026-03-17 00:41:32.351065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:41:32.351076 | orchestrator | 2026-03-17 00:41:32.351087 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-17 00:41:32.351098 | orchestrator | Tuesday 17 March 2026 00:41:30 +0000 (0:00:01.149) 0:00:22.848 ********* 2026-03-17 00:41:32.351108 | orchestrator | changed: [testbed-manager] 2026-03-17 00:41:32.351119 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:41:32.351130 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:41:32.351140 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:41:32.351151 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:41:32.351164 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:41:32.351183 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:41:32.351202 | orchestrator | 2026-03-17 00:41:32.351246 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:41:32.351259 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:41:32.351289 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351301 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351313 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351324 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351335 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351346 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:41:32.351357 | orchestrator | 2026-03-17 00:41:32.351368 | orchestrator | 2026-03-17 00:41:32.351379 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:41:32.351390 | orchestrator | Tuesday 17 March 2026 00:41:32 +0000 (0:00:01.871) 0:00:24.719 ********* 2026-03-17 00:41:32.351401 | orchestrator | =============================================================================== 2026-03-17 00:41:32.351412 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.86s 2026-03-17 00:41:32.351423 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.48s 2026-03-17 00:41:32.351434 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s 2026-03-17 00:41:32.351445 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.32s 2026-03-17 00:41:32.351456 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.15s 2026-03-17 00:41:32.351467 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.93s 2026-03-17 00:41:32.351486 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.84s 2026-03-17 00:41:32.351497 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.53s 2026-03-17 00:41:32.351508 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s 2026-03-17 00:41:32.543119 | orchestrator | ++ semver latest 7.1.1 2026-03-17 00:41:32.596412 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:41:32.596523 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:41:32.596549 | orchestrator | + sudo systemctl restart manager.service 2026-03-17 00:42:12.212785 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-17 00:42:12.212889 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-17 00:42:12.212904 | orchestrator | + local max_attempts=60 2026-03-17 00:42:12.212918 | orchestrator | + local name=ceph-ansible 2026-03-17 00:42:12.212929 | orchestrator | + local attempt_num=1 2026-03-17 00:42:12.212941 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:12.244436 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:12.244511 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:12.244523 | orchestrator | + sleep 5 2026-03-17 00:42:17.249997 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:17.273155 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:17.273325 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:17.273338 | orchestrator | + sleep 5 2026-03-17 00:42:22.275333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:22.310331 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:22.310444 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:22.310461 | orchestrator | + sleep 5 2026-03-17 00:42:27.313614 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:27.347044 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:27.347171 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:27.347198 | orchestrator | + sleep 5 2026-03-17 00:42:32.351195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:32.387635 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:32.387684 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:32.387689 | orchestrator | + sleep 5 2026-03-17 00:42:37.392672 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:37.433133 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:37.433248 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:37.433260 | orchestrator | + sleep 5 2026-03-17 00:42:42.438631 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:42.479510 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:42.479595 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:42.479609 | orchestrator | + sleep 5 2026-03-17 00:42:47.484739 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:47.528017 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:47.528108 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:47.528123 | orchestrator | + sleep 5 2026-03-17 00:42:52.530790 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:52.566089 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:52.566228 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:52.566243 | orchestrator | + sleep 5 2026-03-17 00:42:57.570917 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:42:57.607877 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:42:57.607995 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:42:57.608012 | orchestrator | + sleep 5 2026-03-17 00:43:02.611342 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:43:02.650257 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:02.650329 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:43:02.650338 | orchestrator | + sleep 5 2026-03-17 00:43:07.654347 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:43:07.690274 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:07.690439 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:43:07.690466 | orchestrator | + sleep 5 2026-03-17 00:43:12.694851 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:43:12.725878 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:12.725977 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:43:12.725991 | orchestrator | + sleep 5 2026-03-17 00:43:17.729673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:43:17.765557 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:17.765645 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-17 00:43:17.765661 | orchestrator | + local max_attempts=60 2026-03-17 00:43:17.765675 | orchestrator | + local name=kolla-ansible 2026-03-17 00:43:17.765687 | orchestrator | + local attempt_num=1 2026-03-17 00:43:17.766389 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-17 00:43:17.793127 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:17.793307 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-17 00:43:17.793329 | orchestrator | + local max_attempts=60 2026-03-17 00:43:17.793341 | orchestrator | + local name=osism-ansible 2026-03-17 00:43:17.793354 | orchestrator | + local attempt_num=1 2026-03-17 00:43:17.793377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-17 00:43:17.818755 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:43:17.818849 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-17 00:43:17.818865 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-17 00:43:17.967008 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-17 00:43:18.406964 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-17 00:43:18.407657 | orchestrator | + osism apply gather-facts 2026-03-17 00:43:29.863958 | orchestrator | 2026-03-17 00:43:29 | INFO  | Prepare task for execution of gather-facts. 2026-03-17 00:43:29.929586 | orchestrator | 2026-03-17 00:43:29 | INFO  | Task bd5b0d65-9156-4698-9f69-6e7cab347e38 (gather-facts) was prepared for execution. 2026-03-17 00:43:29.929680 | orchestrator | 2026-03-17 00:43:29 | INFO  | It takes a moment until task bd5b0d65-9156-4698-9f69-6e7cab347e38 (gather-facts) has been started and output is visible here. 2026-03-17 00:43:33.487917 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-17 00:43:33.488025 | orchestrator | -vvvv to see details 2026-03-17 00:43:33.488041 | orchestrator | 2026-03-17 00:43:33.488054 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:43:33.488066 | orchestrator | 2026-03-17 00:43:33.488148 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-17 00:43:33.488164 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! 
=> {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: [Errno 32] Broken pipe. [Errno 32] Broken pipe", "unreachable": true} 2026-03-17 00:43:33.488178 | orchestrator | ...ignoring 2026-03-17 00:43:33.488193 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.14' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488207 | orchestrator | ...ignoring 2026-03-17 00:43:33.488260 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.12' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488291 | orchestrator | ...ignoring 2026-03-17 00:43:33.488311 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.11' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488361 | orchestrator | ...ignoring 2026-03-17 00:43:33.488382 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". 
Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.13' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488402 | orchestrator | ...ignoring 2026-03-17 00:43:33.488423 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.10' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488442 | orchestrator | ...ignoring 2026-03-17 00:43:33.488463 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". 
Make sure this host can be reached over ssh: Warning: Permanently added '192.168.16.15' (ED25519) to the list of known hosts.\r\nno such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-17 00:43:33.488477 | orchestrator | ...ignoring 2026-03-17 00:43:33.488489 | orchestrator | 2026-03-17 00:43:33.488503 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:43:33.488516 | orchestrator | 2026-03-17 00:43:33.488527 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:43:33.488540 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:43:33.488552 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:43:33.488564 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:43:33.488576 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:43:33.488589 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:43:33.488599 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:43:33.488610 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:43:33.488620 | orchestrator | 2026-03-17 00:43:33.488631 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:43:33.488643 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488654 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488685 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488697 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488707 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 
ignored=1  2026-03-17 00:43:33.488718 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488729 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:43:33.488749 | orchestrator | 2026-03-17 00:43:33.596504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-17 00:43:33.604947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-17 00:43:33.617279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-17 00:43:33.628331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-17 00:43:33.638006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-17 00:43:33.652740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-17 00:43:33.664385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-17 00:43:33.677856 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-17 00:43:33.693601 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-17 00:43:33.709770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-17 00:43:33.723439 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-17 00:43:33.737296 | orchestrator | + sudo ln 
-sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-17 00:43:33.749512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-17 00:43:33.759834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-17 00:43:33.775263 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-17 00:43:33.785913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-17 00:43:33.796636 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-17 00:43:33.807108 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-17 00:43:33.820564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-17 00:43:33.830375 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-17 00:43:33.840330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-17 00:43:33.850008 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-17 00:43:33.859707 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-17 00:43:33.875624 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-17 00:43:34.389032 | orchestrator | ok: Runtime: 0:25:20.948521 2026-03-17 00:43:34.505959 | 2026-03-17 00:43:34.506108 | TASK [Deploy services] 2026-03-17 
00:43:35.040057 | orchestrator | skipping: Conditional result was False 2026-03-17 00:43:35.063400 | 2026-03-17 00:43:35.063692 | TASK [Deploy in a nutshell] 2026-03-17 00:43:35.787959 | orchestrator | + set -e 2026-03-17 00:43:35.788219 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:43:35.788249 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:43:35.788272 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:43:35.788286 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:43:35.788299 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 00:43:35.788313 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:43:35.788355 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:43:35.788384 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:43:35.788399 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:43:35.788415 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:43:35.788428 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 00:43:35.788446 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:43:35.788457 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:43:35.788478 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:43:35.788489 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 00:43:35.788503 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 00:43:35.788514 | orchestrator | ++ export ARA=false 2026-03-17 00:43:35.788526 | orchestrator | ++ ARA=false 2026-03-17 00:43:35.788537 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:43:35.788550 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:43:35.788561 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:43:35.788572 | orchestrator | ++ TEMPEST=true 2026-03-17 00:43:35.788583 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:43:35.788594 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:43:35.788606 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 
2026-03-17 00:43:35.788617 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.53 2026-03-17 00:43:35.788628 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:43:35.788639 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:43:35.788650 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:43:35.788661 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:43:35.788703 | orchestrator | 2026-03-17 00:43:35.788727 | orchestrator | # PULL IMAGES 2026-03-17 00:43:35.788739 | orchestrator | 2026-03-17 00:43:35.788750 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:43:35.788761 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:43:35.788772 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:43:35.788817 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:43:35.788832 | orchestrator | + echo 2026-03-17 00:43:35.788843 | orchestrator | + echo '# PULL IMAGES' 2026-03-17 00:43:35.788854 | orchestrator | + echo 2026-03-17 00:43:35.788898 | orchestrator | ++ semver latest 7.0.0 2026-03-17 00:43:35.837964 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:43:35.838140 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:43:35.838162 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-17 00:43:36.933688 | orchestrator | 2026-03-17 00:43:36 | INFO  | Trying to run play pull-images in environment custom 2026-03-17 00:43:46.982409 | orchestrator | 2026-03-17 00:43:46 | INFO  | Prepare task for execution of pull-images. 2026-03-17 00:43:47.059956 | orchestrator | 2026-03-17 00:43:47 | INFO  | Task 13b6f3c8-1904-431d-af0f-4aa0942e4f73 (pull-images) was prepared for execution. 2026-03-17 00:43:47.064005 | orchestrator | 2026-03-17 00:43:47 | INFO  | Task 13b6f3c8-1904-431d-af0f-4aa0942e4f73 is running in background. No more output. Check ARA for logs. 
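The gate logged above (`semver latest 7.0.0` returning `-1`, then the `[[ latest == latest ]]` fallback before `osism apply`) can be sketched as follows. This is a minimal reconstruction, not the deploy script itself; `SEMVER_RESULT` is hard-coded to the value the log shows, and `ACTION` is an illustrative variable.

```shell
# Sketch of the version gate seen in the log: the new pull-images path runs
# when the manager version compares >= 7.0.0 OR is the literal "latest".
MANAGER_VERSION=latest
SEMVER_RESULT=-1   # as logged: `semver latest 7.0.0` -> -1 ("latest" is not a numeric version)
ACTION=""
if [ "$SEMVER_RESULT" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
  # the log then runs this osism command in the background (--no-wait)
  ACTION="osism apply --no-wait -r 2 -e custom pull-images"
fi
echo "$ACTION"
```

The `-r 2` retry count matches the `OSISM_APPLY_RETRY=1` export only loosely; the exact flag semantics come from the `osism` CLI, and the retry value here is simply what the log recorded.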
2026-03-17 00:43:48.440142 | orchestrator | 2026-03-17 00:43:48 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-17 00:43:58.516513 | orchestrator | 2026-03-17 00:43:58 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-17 00:43:58.597321 | orchestrator | 2026-03-17 00:43:58 | INFO  | Task 803b97da-8a8e-435d-aa43-e1e91c1d9099 (wipe-partitions) was prepared for execution. 2026-03-17 00:43:58.597424 | orchestrator | 2026-03-17 00:43:58 | INFO  | It takes a moment until task 803b97da-8a8e-435d-aa43-e1e91c1d9099 (wipe-partitions) has been started and output is visible here. 2026-03-17 00:44:11.390353 | orchestrator | 2026-03-17 00:44:11.390437 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-17 00:44:11.390448 | orchestrator | 2026-03-17 00:44:11.390456 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-17 00:44:11.390469 | orchestrator | Tuesday 17 March 2026 00:44:01 +0000 (0:00:00.173) 0:00:00.173 ********* 2026-03-17 00:44:11.390497 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:44:11.390506 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:44:11.390513 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:44:11.390519 | orchestrator | 2026-03-17 00:44:11.390526 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-17 00:44:11.390533 | orchestrator | Tuesday 17 March 2026 00:44:03 +0000 (0:00:01.222) 0:00:01.395 ********* 2026-03-17 00:44:11.390542 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:11.390549 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:11.390556 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:11.390563 | orchestrator | 2026-03-17 00:44:11.390570 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-17 00:44:11.390577 | orchestrator | 
Tuesday 17 March 2026 00:44:03 +0000 (0:00:00.245) 0:00:01.640 ********* 2026-03-17 00:44:11.390583 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:11.390590 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:11.390597 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:11.390603 | orchestrator | 2026-03-17 00:44:11.390610 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-17 00:44:11.390617 | orchestrator | Tuesday 17 March 2026 00:44:03 +0000 (0:00:00.606) 0:00:02.246 ********* 2026-03-17 00:44:11.390623 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:11.390630 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:11.390637 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:11.390643 | orchestrator | 2026-03-17 00:44:11.390650 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-17 00:44:11.390657 | orchestrator | Tuesday 17 March 2026 00:44:04 +0000 (0:00:00.217) 0:00:02.464 ********* 2026-03-17 00:44:11.390664 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:44:11.390673 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:44:11.390680 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:44:11.390686 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:44:11.390693 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:44:11.390700 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:44:11.390706 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:44:11.390713 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:44:11.390720 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:44:11.390726 | orchestrator | 2026-03-17 00:44:11.390733 | orchestrator | TASK [Wipe partitions with wipefs] 
********************************************* 2026-03-17 00:44:11.390740 | orchestrator | Tuesday 17 March 2026 00:44:05 +0000 (0:00:01.341) 0:00:03.806 ********* 2026-03-17 00:44:11.390747 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:44:11.390754 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:44:11.390760 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:44:11.390767 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:44:11.390774 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:44:11.390780 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:44:11.390787 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:44:11.390793 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:44:11.390800 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:44:11.390806 | orchestrator | 2026-03-17 00:44:11.390818 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-17 00:44:11.390825 | orchestrator | Tuesday 17 March 2026 00:44:06 +0000 (0:00:01.379) 0:00:05.185 ********* 2026-03-17 00:44:11.390831 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:44:11.390838 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:44:11.390844 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:44:11.390851 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:44:11.390863 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:44:11.390870 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:44:11.390876 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:44:11.390883 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:44:11.390890 | orchestrator | changed: [testbed-node-4] => 
(item=/dev/sdd) 2026-03-17 00:44:11.390896 | orchestrator | 2026-03-17 00:44:11.390903 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-17 00:44:11.390911 | orchestrator | Tuesday 17 March 2026 00:44:09 +0000 (0:00:03.107) 0:00:08.292 ********* 2026-03-17 00:44:11.390919 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:44:11.390926 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:44:11.390934 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:44:11.390941 | orchestrator | 2026-03-17 00:44:11.390949 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-17 00:44:11.390956 | orchestrator | Tuesday 17 March 2026 00:44:10 +0000 (0:00:00.587) 0:00:08.880 ********* 2026-03-17 00:44:11.390964 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:44:11.390971 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:44:11.390979 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:44:11.390987 | orchestrator | 2026-03-17 00:44:11.390994 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:44:11.391003 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:11.391011 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:11.391079 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:11.391089 | orchestrator | 2026-03-17 00:44:11.391097 | orchestrator | 2026-03-17 00:44:11.391105 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:44:11.391112 | orchestrator | Tuesday 17 March 2026 00:44:11 +0000 (0:00:00.632) 0:00:09.513 ********* 2026-03-17 00:44:11.391120 | orchestrator | 
=============================================================================== 2026-03-17 00:44:11.391128 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.11s 2026-03-17 00:44:11.391136 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.38s 2026-03-17 00:44:11.391144 | orchestrator | Check device availability ----------------------------------------------- 1.34s 2026-03-17 00:44:11.391152 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.22s 2026-03-17 00:44:11.391159 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-03-17 00:44:11.391167 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2026-03-17 00:44:11.391175 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2026-03-17 00:44:11.391183 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2026-03-17 00:44:11.391190 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2026-03-17 00:44:22.794455 | orchestrator | 2026-03-17 00:44:22 | INFO  | Prepare task for execution of facts. 2026-03-17 00:44:22.871652 | orchestrator | 2026-03-17 00:44:22 | INFO  | Task 4e5874e7-34db-4c94-9dfe-62bc89fb8480 (facts) was prepared for execution. 2026-03-17 00:44:22.871746 | orchestrator | 2026-03-17 00:44:22 | INFO  | It takes a moment until task 4e5874e7-34db-4c94-9dfe-62bc89fb8480 (facts) has been started and output is visible here. 
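The wipe sequence in the play above (wipefs the signatures, overwrite the first 32M with zeros, then refresh udev) can be sketched against a scratch file instead of a real block device. The file path is a temp-file stand-in; on the testbed nodes the targets are `/dev/sdb`..`/dev/sdd`, and the udev steps (`udevadm control --reload-rules`, `udevadm trigger`) only make sense on real devices, so they appear here as comments.

```shell
# Simulate a disk with stale data, then zero its header region as the
# "Overwrite first 32M with zeros" task does with dd on the real devices.
set -e
IMG="$(mktemp)"                                   # stand-in for /dev/sdX (assumption)
printf 'CEPH_OSD_STALE_DATA' > "$IMG"             # fake leftover metadata
truncate -s 64M "$IMG"                            # give it a device-like size
dd if=/dev/zero of="$IMG" bs=1M count=32 conv=notrunc status=none
# on real block devices the play then runs:
#   udevadm control --reload-rules && udevadm trigger
cmp -n 16 "$IMG" /dev/zero && echo header-zeroed  # first bytes are now zeros
```

`conv=notrunc` matters on block devices (there is nothing to truncate) and keeps this sketch from shrinking the scratch file to 32M.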
2026-03-17 00:44:34.353316 | orchestrator | 2026-03-17 00:44:34.353422 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 00:44:34.353439 | orchestrator | 2026-03-17 00:44:34.353477 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 00:44:34.353491 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.295) 0:00:00.295 ********* 2026-03-17 00:44:34.353500 | orchestrator | ok: [testbed-manager] 2026-03-17 00:44:34.353508 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:44:34.353516 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:34.353523 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:34.353530 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:44:34.353539 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:34.353551 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:44:34.353563 | orchestrator | 2026-03-17 00:44:34.353576 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-17 00:44:34.353588 | orchestrator | Tuesday 17 March 2026 00:44:27 +0000 (0:00:01.236) 0:00:01.531 ********* 2026-03-17 00:44:34.353600 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:44:34.353611 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:44:34.353618 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:44:34.353627 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:44:34.353638 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:34.353651 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:34.353663 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:34.353675 | orchestrator | 2026-03-17 00:44:34.353686 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:44:34.353713 | orchestrator | 2026-03-17 00:44:34.353721 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-17 00:44:34.353729 | orchestrator | Tuesday 17 March 2026 00:44:28 +0000 (0:00:01.069) 0:00:02.601 ********* 2026-03-17 00:44:34.353736 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:44:34.353743 | orchestrator | ok: [testbed-manager] 2026-03-17 00:44:34.353750 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:44:34.353757 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:44:34.353765 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:34.353772 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:34.353779 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:34.353786 | orchestrator | 2026-03-17 00:44:34.353793 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:44:34.353800 | orchestrator | 2026-03-17 00:44:34.353808 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:44:34.353815 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:05.427) 0:00:08.028 ********* 2026-03-17 00:44:34.353822 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:44:34.353829 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:44:34.353836 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:44:34.353843 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:44:34.353850 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:34.353858 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:34.353868 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:34.353880 | orchestrator | 2026-03-17 00:44:34.353893 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:44:34.353906 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353921 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-17 00:44:34.353933 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353947 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353955 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353971 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353980 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:34.353988 | orchestrator | 2026-03-17 00:44:34.353997 | orchestrator | 2026-03-17 00:44:34.354075 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:44:34.354090 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.494) 0:00:08.523 ********* 2026-03-17 00:44:34.354101 | orchestrator | =============================================================================== 2026-03-17 00:44:34.354110 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s 2026-03-17 00:44:34.354117 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-03-17 00:44:34.354125 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2026-03-17 00:44:34.354132 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-17 00:44:35.790558 | orchestrator | 2026-03-17 00:44:35 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-17 00:44:35.853844 | orchestrator | 2026-03-17 00:44:35 | INFO  | Task cbf9d2bd-347e-4129-bb48-830b897ea844 (ceph-configure-lvm-volumes) was prepared for execution. 
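The `osism.commons.facts` tasks above ("Create custom facts directory", "Copy fact files") rely on Ansible's local-facts mechanism: JSON `.fact` files under `/etc/ansible/facts.d` are picked up during fact gathering and exposed as `ansible_local.<name>`. A minimal sketch, using a temp directory in place of `/etc/ansible/facts.d` and an illustrative fact file name (`testbed.fact` and its keys are assumptions, not taken from the log):

```shell
# Drop a static JSON fact file where Ansible's setup module would find it,
# then read one key back the way a playbook would via ansible_local.
set -e
FACTS_D="$(mktemp -d)"        # stand-in for /etc/ansible/facts.d (assumption)
cat > "$FACTS_D/testbed.fact" <<'EOF'
{"deploy_mode": "manager", "ceph_stack": "ceph-ansible"}
EOF
# Ansible would expose this as ansible_local.testbed.deploy_mode
python3 -c "import json; print(json.load(open('$FACTS_D/testbed.fact'))['deploy_mode'])"
```

Static `.fact` files must be valid JSON (or INI); executable files in the same directory are run and their stdout parsed instead.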
2026-03-17 00:44:35.853949 | orchestrator | 2026-03-17 00:44:35 | INFO  | It takes a moment until task cbf9d2bd-347e-4129-bb48-830b897ea844 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-17 00:44:46.589461 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 00:44:46.589556 | orchestrator | 2.16.14 2026-03-17 00:44:46.589567 | orchestrator | 2026-03-17 00:44:46.589575 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:44:46.589583 | orchestrator | 2026-03-17 00:44:46.589590 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:44:46.589597 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.273) 0:00:00.273 ********* 2026-03-17 00:44:46.589605 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:46.589612 | orchestrator | 2026-03-17 00:44:46.589618 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:44:46.589625 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.234) 0:00:00.508 ********* 2026-03-17 00:44:46.589633 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:46.589640 | orchestrator | 2026-03-17 00:44:46.589646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589653 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.197) 0:00:00.706 ********* 2026-03-17 00:44:46.589667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:44:46.589674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:44:46.589681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:44:46.589688 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:44:46.589694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:44:46.589701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:44:46.589707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:44:46.589714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:44:46.589721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-17 00:44:46.589727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:44:46.589748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:44:46.589755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:44:46.589762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:44:46.589770 | orchestrator | 2026-03-17 00:44:46.589781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589792 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.342) 0:00:01.048 ********* 2026-03-17 00:44:46.589804 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.589820 | orchestrator | 2026-03-17 00:44:46.589835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589844 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.389) 0:00:01.438 ********* 2026-03-17 00:44:46.589854 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.589864 | orchestrator | 2026-03-17 00:44:46.589874 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589888 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.174) 0:00:01.613 ********* 2026-03-17 00:44:46.589898 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.589909 | orchestrator | 2026-03-17 00:44:46.589919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589929 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.170) 0:00:01.783 ********* 2026-03-17 00:44:46.589939 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.589950 | orchestrator | 2026-03-17 00:44:46.589960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.589970 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.163) 0:00:01.947 ********* 2026-03-17 00:44:46.589981 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590073 | orchestrator | 2026-03-17 00:44:46.590083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590091 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.172) 0:00:02.119 ********* 2026-03-17 00:44:46.590098 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590106 | orchestrator | 2026-03-17 00:44:46.590113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590120 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.194) 0:00:02.314 ********* 2026-03-17 00:44:46.590128 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590136 | orchestrator | 2026-03-17 00:44:46.590143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590151 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.184) 0:00:02.498 ********* 
2026-03-17 00:44:46.590158 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590165 | orchestrator | 2026-03-17 00:44:46.590172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590180 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.171) 0:00:02.670 ********* 2026-03-17 00:44:46.590188 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393) 2026-03-17 00:44:46.590201 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393) 2026-03-17 00:44:46.590212 | orchestrator | 2026-03-17 00:44:46.590230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590263 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.358) 0:00:03.028 ********* 2026-03-17 00:44:46.590276 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b) 2026-03-17 00:44:46.590288 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b) 2026-03-17 00:44:46.590299 | orchestrator | 2026-03-17 00:44:46.590318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590341 | orchestrator | Tuesday 17 March 2026 00:44:43 +0000 (0:00:00.388) 0:00:03.416 ********* 2026-03-17 00:44:46.590353 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451) 2026-03-17 00:44:46.590365 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451) 2026-03-17 00:44:46.590378 | orchestrator | 2026-03-17 00:44:46.590390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590402 | orchestrator | Tuesday 17 March 2026 00:44:43 
+0000 (0:00:00.520) 0:00:03.937 ********* 2026-03-17 00:44:46.590414 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63) 2026-03-17 00:44:46.590426 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63) 2026-03-17 00:44:46.590438 | orchestrator | 2026-03-17 00:44:46.590449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:46.590462 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.552) 0:00:04.489 ********* 2026-03-17 00:44:46.590474 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:44:46.590486 | orchestrator | 2026-03-17 00:44:46.590498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590510 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.596) 0:00:05.085 ********* 2026-03-17 00:44:46.590522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:44:46.590535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:44:46.590546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:44:46.590559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:44:46.590570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:44:46.590582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:44:46.590594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:44:46.590607 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:44:46.590618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-17 00:44:46.590631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:44:46.590642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:44:46.590654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:44:46.590666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:44:46.590678 | orchestrator | 2026-03-17 00:44:46.590690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590702 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.336) 0:00:05.422 ********* 2026-03-17 00:44:46.590714 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590726 | orchestrator | 2026-03-17 00:44:46.590737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590749 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.187) 0:00:05.610 ********* 2026-03-17 00:44:46.590760 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590772 | orchestrator | 2026-03-17 00:44:46.590784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590796 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.169) 0:00:05.780 ********* 2026-03-17 00:44:46.590808 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590827 | orchestrator | 2026-03-17 00:44:46.590839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590851 | 
orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.176) 0:00:05.956 ********* 2026-03-17 00:44:46.590863 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590875 | orchestrator | 2026-03-17 00:44:46.590887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590899 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.183) 0:00:06.139 ********* 2026-03-17 00:44:46.590912 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590923 | orchestrator | 2026-03-17 00:44:46.590934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.590945 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.174) 0:00:06.314 ********* 2026-03-17 00:44:46.590956 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.590967 | orchestrator | 2026-03-17 00:44:46.590979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:46.591012 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.178) 0:00:06.492 ********* 2026-03-17 00:44:46.591024 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:46.591036 | orchestrator | 2026-03-17 00:44:46.591056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239219 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.183) 0:00:06.676 ********* 2026-03-17 00:44:53.239352 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239375 | orchestrator | 2026-03-17 00:44:53.239390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239401 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.189) 0:00:06.865 ********* 2026-03-17 00:44:53.239411 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-17 00:44:53.239421 | orchestrator | 
ok: [testbed-node-3] => (item=sda14) 2026-03-17 00:44:53.239432 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-17 00:44:53.239441 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-17 00:44:53.239451 | orchestrator | 2026-03-17 00:44:53.239461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239490 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.820) 0:00:07.686 ********* 2026-03-17 00:44:53.239501 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239510 | orchestrator | 2026-03-17 00:44:53.239520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239530 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.179) 0:00:07.865 ********* 2026-03-17 00:44:53.239546 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239560 | orchestrator | 2026-03-17 00:44:53.239570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239580 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.186) 0:00:08.051 ********* 2026-03-17 00:44:53.239590 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239599 | orchestrator | 2026-03-17 00:44:53.239609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:53.239619 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.208) 0:00:08.260 ********* 2026-03-17 00:44:53.239629 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239639 | orchestrator | 2026-03-17 00:44:53.239648 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:44:53.239658 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.183) 0:00:08.443 ********* 2026-03-17 00:44:53.239668 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'sdb', 'value': None}) 2026-03-17 00:44:53.239678 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:44:53.239688 | orchestrator | 2026-03-17 00:44:53.239698 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-17 00:44:53.239707 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.155) 0:00:08.599 ********* 2026-03-17 00:44:53.239755 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239767 | orchestrator | 2026-03-17 00:44:53.239778 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:44:53.239790 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.129) 0:00:08.728 ********* 2026-03-17 00:44:53.239801 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239812 | orchestrator | 2026-03-17 00:44:53.239824 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:44:53.239835 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.110) 0:00:08.839 ********* 2026-03-17 00:44:53.239847 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.239858 | orchestrator | 2026-03-17 00:44:53.239869 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-17 00:44:53.239880 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.123) 0:00:08.962 ********* 2026-03-17 00:44:53.239891 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:53.239902 | orchestrator | 2026-03-17 00:44:53.239913 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:44:53.239924 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.131) 0:00:09.094 ********* 2026-03-17 00:44:53.239937 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45fdc78c-b598-5156-b36d-ba4cd7c12386'}}) 
2026-03-17 00:44:53.239949 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b5d6da3-626f-5c09-a421-20ac1510e3d2'}}) 2026-03-17 00:44:53.239958 | orchestrator | 2026-03-17 00:44:53.239968 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:44:53.239978 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.151) 0:00:09.246 ********* 2026-03-17 00:44:53.240089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45fdc78c-b598-5156-b36d-ba4cd7c12386'}})  2026-03-17 00:44:53.240108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b5d6da3-626f-5c09-a421-20ac1510e3d2'}})  2026-03-17 00:44:53.240123 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240134 | orchestrator | 2026-03-17 00:44:53.240143 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:44:53.240153 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.134) 0:00:09.380 ********* 2026-03-17 00:44:53.240163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45fdc78c-b598-5156-b36d-ba4cd7c12386'}})  2026-03-17 00:44:53.240173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b5d6da3-626f-5c09-a421-20ac1510e3d2'}})  2026-03-17 00:44:53.240182 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240192 | orchestrator | 2026-03-17 00:44:53.240201 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:44:53.240211 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.268) 0:00:09.648 ********* 2026-03-17 00:44:53.240227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45fdc78c-b598-5156-b36d-ba4cd7c12386'}})  2026-03-17 
00:44:53.240264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b5d6da3-626f-5c09-a421-20ac1510e3d2'}})  2026-03-17 00:44:53.240276 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240285 | orchestrator | 2026-03-17 00:44:53.240295 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:44:53.240305 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.135) 0:00:09.784 ********* 2026-03-17 00:44:53.240314 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:53.240324 | orchestrator | 2026-03-17 00:44:53.240334 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:44:53.240343 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.129) 0:00:09.913 ********* 2026-03-17 00:44:53.240353 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:53.240372 | orchestrator | 2026-03-17 00:44:53.240382 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:44:53.240395 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.127) 0:00:10.040 ********* 2026-03-17 00:44:53.240409 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240420 | orchestrator | 2026-03-17 00:44:53.240429 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:44:53.240439 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.125) 0:00:10.166 ********* 2026-03-17 00:44:53.240448 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240458 | orchestrator | 2026-03-17 00:44:53.240467 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:44:53.240477 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.112) 0:00:10.278 ********* 2026-03-17 00:44:53.240487 | orchestrator | skipping: [testbed-node-3] 
2026-03-17 00:44:53.240496 | orchestrator | 2026-03-17 00:44:53.240506 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:44:53.240515 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.134) 0:00:10.413 ********* 2026-03-17 00:44:53.240525 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:44:53.240534 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:44:53.240544 | orchestrator |  "sdb": { 2026-03-17 00:44:53.240554 | orchestrator |  "osd_lvm_uuid": "45fdc78c-b598-5156-b36d-ba4cd7c12386" 2026-03-17 00:44:53.240564 | orchestrator |  }, 2026-03-17 00:44:53.240574 | orchestrator |  "sdc": { 2026-03-17 00:44:53.240584 | orchestrator |  "osd_lvm_uuid": "2b5d6da3-626f-5c09-a421-20ac1510e3d2" 2026-03-17 00:44:53.240593 | orchestrator |  } 2026-03-17 00:44:53.240603 | orchestrator |  } 2026-03-17 00:44:53.240613 | orchestrator | } 2026-03-17 00:44:53.240623 | orchestrator | 2026-03-17 00:44:53.240632 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:44:53.240642 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.123) 0:00:10.537 ********* 2026-03-17 00:44:53.240650 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240658 | orchestrator | 2026-03-17 00:44:53.240666 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:44:53.240674 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.109) 0:00:10.646 ********* 2026-03-17 00:44:53.240682 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:53.240690 | orchestrator | 2026-03-17 00:44:53.240698 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-17 00:44:53.240706 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.095) 0:00:10.742 ********* 2026-03-17 00:44:53.240714 | orchestrator | skipping: [testbed-node-3] 
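The tasks above derive a deterministic `lvm_volumes` list from the `ceph_osd_devices` mapping: for each OSD device, the printed configuration pairs an LV named `osd-block-<osd_lvm_uuid>` with a VG named `ceph-<osd_lvm_uuid>`. A minimal sketch of that transformation, using the values shown in the log (the helper name is hypothetical, not from the OSISM playbooks):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Map each OSD device's osd_lvm_uuid to the LV/VG naming
    scheme visible in the 'Print configuration data' output."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-3 log output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "45fdc78c-b598-5156-b36d-ba4cd7c12386"},
    "sdc": {"osd_lvm_uuid": "2b5d6da3-626f-5c09-a421-20ac1510e3d2"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

This reproduces the `lvm_volumes` entries the playbook prints for testbed-node-3; the block/db/wal variants are skipped in this run because only plain block devices are configured.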
2026-03-17 00:44:53.240721 | orchestrator | 2026-03-17 00:44:53.240729 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-17 00:44:53.240737 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.100) 0:00:10.842 ********* 2026-03-17 00:44:53.240745 | orchestrator | changed: [testbed-node-3] => { 2026-03-17 00:44:53.240752 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-17 00:44:53.240760 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:44:53.240768 | orchestrator |  "sdb": { 2026-03-17 00:44:53.240776 | orchestrator |  "osd_lvm_uuid": "45fdc78c-b598-5156-b36d-ba4cd7c12386" 2026-03-17 00:44:53.240784 | orchestrator |  }, 2026-03-17 00:44:53.240792 | orchestrator |  "sdc": { 2026-03-17 00:44:53.240800 | orchestrator |  "osd_lvm_uuid": "2b5d6da3-626f-5c09-a421-20ac1510e3d2" 2026-03-17 00:44:53.240808 | orchestrator |  } 2026-03-17 00:44:53.240816 | orchestrator |  }, 2026-03-17 00:44:53.240824 | orchestrator |  "lvm_volumes": [ 2026-03-17 00:44:53.240831 | orchestrator |  { 2026-03-17 00:44:53.240840 | orchestrator |  "data": "osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386", 2026-03-17 00:44:53.240847 | orchestrator |  "data_vg": "ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386" 2026-03-17 00:44:53.240861 | orchestrator |  }, 2026-03-17 00:44:53.240868 | orchestrator |  { 2026-03-17 00:44:53.240876 | orchestrator |  "data": "osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2", 2026-03-17 00:44:53.240884 | orchestrator |  "data_vg": "ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2" 2026-03-17 00:44:53.240892 | orchestrator |  } 2026-03-17 00:44:53.240900 | orchestrator |  ] 2026-03-17 00:44:53.240908 | orchestrator |  } 2026-03-17 00:44:53.240916 | orchestrator | } 2026-03-17 00:44:53.240923 | orchestrator | 2026-03-17 00:44:53.240931 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-17 00:44:53.240939 | orchestrator | Tuesday 17 March 2026 
00:44:50 +0000 (0:00:00.161) 0:00:11.004 ********* 2026-03-17 00:44:53.240947 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:53.240955 | orchestrator | 2026-03-17 00:44:53.240962 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:44:53.240970 | orchestrator | 2026-03-17 00:44:53.240978 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:44:53.241005 | orchestrator | Tuesday 17 March 2026 00:44:52 +0000 (0:00:01.899) 0:00:12.903 ********* 2026-03-17 00:44:53.241013 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:53.241021 | orchestrator | 2026-03-17 00:44:53.241029 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:44:53.241037 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.218) 0:00:13.122 ********* 2026-03-17 00:44:53.241045 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:53.241053 | orchestrator | 2026-03-17 00:44:53.241066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.909383 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.205) 0:00:13.327 ********* 2026-03-17 00:44:59.909521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:44:59.909548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:44:59.909568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:44:59.909588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:44:59.909607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-17 
00:44:59.909626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:44:59.909645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:44:59.909670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:44:59.909689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-17 00:44:59.909708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:44:59.909719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:44:59.909730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:44:59.909763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:44:59.909774 | orchestrator | 2026-03-17 00:44:59.909786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.909797 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.326) 0:00:13.654 ********* 2026-03-17 00:44:59.909809 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.909820 | orchestrator | 2026-03-17 00:44:59.909832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.909842 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.180) 0:00:13.834 ********* 2026-03-17 00:44:59.909876 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.909889 | orchestrator | 2026-03-17 00:44:59.909908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.909927 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.174) 0:00:14.009 ********* 2026-03-17 
00:44:59.909948 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.909967 | orchestrator | 2026-03-17 00:44:59.910086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910103 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.184) 0:00:14.193 ********* 2026-03-17 00:44:59.910116 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910129 | orchestrator | 2026-03-17 00:44:59.910141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910153 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.166) 0:00:14.359 ********* 2026-03-17 00:44:59.910165 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910177 | orchestrator | 2026-03-17 00:44:59.910188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910201 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.452) 0:00:14.812 ********* 2026-03-17 00:44:59.910213 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910225 | orchestrator | 2026-03-17 00:44:59.910237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910248 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.176) 0:00:14.988 ********* 2026-03-17 00:44:59.910261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910273 | orchestrator | 2026-03-17 00:44:59.910285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910295 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.186) 0:00:15.175 ********* 2026-03-17 00:44:59.910306 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910316 | orchestrator | 2026-03-17 00:44:59.910327 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-17 00:44:59.910338 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.181) 0:00:15.357 ********* 2026-03-17 00:44:59.910348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3) 2026-03-17 00:44:59.910360 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3) 2026-03-17 00:44:59.910371 | orchestrator | 2026-03-17 00:44:59.910381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910392 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.375) 0:00:15.732 ********* 2026-03-17 00:44:59.910403 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15) 2026-03-17 00:44:59.910413 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15) 2026-03-17 00:44:59.910424 | orchestrator | 2026-03-17 00:44:59.910435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910445 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.387) 0:00:16.120 ********* 2026-03-17 00:44:59.910456 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c) 2026-03-17 00:44:59.910466 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c) 2026-03-17 00:44:59.910489 | orchestrator | 2026-03-17 00:44:59.910501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910532 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.367) 0:00:16.488 ********* 2026-03-17 00:44:59.910544 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8) 2026-03-17 00:44:59.910554 | orchestrator | 
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8) 2026-03-17 00:44:59.910565 | orchestrator | 2026-03-17 00:44:59.910587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:59.910598 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.367) 0:00:16.855 ********* 2026-03-17 00:44:59.910608 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:44:59.910619 | orchestrator | 2026-03-17 00:44:59.910630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.910640 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.361) 0:00:17.217 ********* 2026-03-17 00:44:59.910651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:44:59.910662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:44:59.910681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:44:59.910692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:44:59.910703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:44:59.910713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:44:59.910724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:44:59.910734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:44:59.910745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-17 00:44:59.910755 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:44:59.910766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:44:59.910776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:44:59.910787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:44:59.910797 | orchestrator | 2026-03-17 00:44:59.910808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.910819 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.354) 0:00:17.571 ********* 2026-03-17 00:44:59.910829 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910840 | orchestrator | 2026-03-17 00:44:59.910850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.910861 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.165) 0:00:17.737 ********* 2026-03-17 00:44:59.910872 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910882 | orchestrator | 2026-03-17 00:44:59.910893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.910904 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.510) 0:00:18.248 ********* 2026-03-17 00:44:59.910914 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910925 | orchestrator | 2026-03-17 00:44:59.910935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.910946 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.180) 0:00:18.428 ********* 2026-03-17 00:44:59.910957 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.910967 | orchestrator | 2026-03-17 00:44:59.911071 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911085 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.179) 0:00:18.607 ********* 2026-03-17 00:44:59.911096 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.911107 | orchestrator | 2026-03-17 00:44:59.911117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911128 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.202) 0:00:18.810 ********* 2026-03-17 00:44:59.911139 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.911158 | orchestrator | 2026-03-17 00:44:59.911169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911179 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.169) 0:00:18.980 ********* 2026-03-17 00:44:59.911190 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.911201 | orchestrator | 2026-03-17 00:44:59.911211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911222 | orchestrator | Tuesday 17 March 2026 00:44:59 +0000 (0:00:00.173) 0:00:19.154 ********* 2026-03-17 00:44:59.911232 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:59.911243 | orchestrator | 2026-03-17 00:44:59.911253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911264 | orchestrator | Tuesday 17 March 2026 00:44:59 +0000 (0:00:00.162) 0:00:19.317 ********* 2026-03-17 00:44:59.911275 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-17 00:44:59.911286 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-17 00:44:59.911297 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-17 00:44:59.911308 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-17 00:44:59.911318 | orchestrator | 2026-03-17 
00:44:59.911329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:59.911340 | orchestrator | Tuesday 17 March 2026 00:44:59 +0000 (0:00:00.582) 0:00:19.899 ********* 2026-03-17 00:44:59.911350 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.357866 | orchestrator | 2026-03-17 00:45:05.357962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:05.358003 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.197) 0:00:20.097 ********* 2026-03-17 00:45:05.358044 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358051 | orchestrator | 2026-03-17 00:45:05.358055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:05.358060 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.227) 0:00:20.324 ********* 2026-03-17 00:45:05.358064 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358068 | orchestrator | 2026-03-17 00:45:05.358072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:05.358076 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.210) 0:00:20.535 ********* 2026-03-17 00:45:05.358080 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358084 | orchestrator | 2026-03-17 00:45:05.358088 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:45:05.358092 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.192) 0:00:20.727 ********* 2026-03-17 00:45:05.358096 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-17 00:45:05.358100 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:45:05.358104 | orchestrator | 2026-03-17 00:45:05.358108 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-17 00:45:05.358126 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.251) 0:00:20.978 ********* 2026-03-17 00:45:05.358130 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358134 | orchestrator | 2026-03-17 00:45:05.358137 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:45:05.358141 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.109) 0:00:21.088 ********* 2026-03-17 00:45:05.358145 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358149 | orchestrator | 2026-03-17 00:45:05.358153 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:45:05.358161 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.106) 0:00:21.195 ********* 2026-03-17 00:45:05.358165 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358168 | orchestrator | 2026-03-17 00:45:05.358172 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-17 00:45:05.358176 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.108) 0:00:21.304 ********* 2026-03-17 00:45:05.358193 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:05.358198 | orchestrator | 2026-03-17 00:45:05.358201 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:45:05.358205 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.102) 0:00:21.406 ********* 2026-03-17 00:45:05.358210 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc88f193-a403-571c-9716-867079cb0a77'}}) 2026-03-17 00:45:05.358214 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e905ad0-9805-5328-aec5-92944dddbd57'}}) 2026-03-17 00:45:05.358218 | orchestrator | 2026-03-17 00:45:05.358221 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:45:05.358225 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.121) 0:00:21.528 ********* 2026-03-17 00:45:05.358230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc88f193-a403-571c-9716-867079cb0a77'}})  2026-03-17 00:45:05.358235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e905ad0-9805-5328-aec5-92944dddbd57'}})  2026-03-17 00:45:05.358238 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358242 | orchestrator | 2026-03-17 00:45:05.358246 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:45:05.358250 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.107) 0:00:21.635 ********* 2026-03-17 00:45:05.358253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc88f193-a403-571c-9716-867079cb0a77'}})  2026-03-17 00:45:05.358257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e905ad0-9805-5328-aec5-92944dddbd57'}})  2026-03-17 00:45:05.358261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358265 | orchestrator | 2026-03-17 00:45:05.358269 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:45:05.358272 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.117) 0:00:21.753 ********* 2026-03-17 00:45:05.358276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc88f193-a403-571c-9716-867079cb0a77'}})  2026-03-17 00:45:05.358280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e905ad0-9805-5328-aec5-92944dddbd57'}})  2026-03-17 00:45:05.358284 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358287 | 
orchestrator | 2026-03-17 00:45:05.358291 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:45:05.358295 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.124) 0:00:21.877 ********* 2026-03-17 00:45:05.358299 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:05.358302 | orchestrator | 2026-03-17 00:45:05.358306 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:45:05.358310 | orchestrator | Tuesday 17 March 2026 00:45:01 +0000 (0:00:00.125) 0:00:22.003 ********* 2026-03-17 00:45:05.358313 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:05.358317 | orchestrator | 2026-03-17 00:45:05.358321 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:45:05.358324 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.135) 0:00:22.138 ********* 2026-03-17 00:45:05.358339 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358343 | orchestrator | 2026-03-17 00:45:05.358346 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:45:05.358350 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.122) 0:00:22.260 ********* 2026-03-17 00:45:05.358354 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358358 | orchestrator | 2026-03-17 00:45:05.358361 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:45:05.358365 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.274) 0:00:22.535 ********* 2026-03-17 00:45:05.358369 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358376 | orchestrator | 2026-03-17 00:45:05.358380 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:45:05.358384 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 
(0:00:00.132) 0:00:22.668 ********* 2026-03-17 00:45:05.358387 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:45:05.358391 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:45:05.358395 | orchestrator |  "sdb": { 2026-03-17 00:45:05.358399 | orchestrator |  "osd_lvm_uuid": "dc88f193-a403-571c-9716-867079cb0a77" 2026-03-17 00:45:05.358403 | orchestrator |  }, 2026-03-17 00:45:05.358407 | orchestrator |  "sdc": { 2026-03-17 00:45:05.358411 | orchestrator |  "osd_lvm_uuid": "9e905ad0-9805-5328-aec5-92944dddbd57" 2026-03-17 00:45:05.358415 | orchestrator |  } 2026-03-17 00:45:05.358418 | orchestrator |  } 2026-03-17 00:45:05.358422 | orchestrator | } 2026-03-17 00:45:05.358427 | orchestrator | 2026-03-17 00:45:05.358431 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:45:05.358435 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.114) 0:00:22.782 ********* 2026-03-17 00:45:05.358440 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358444 | orchestrator | 2026-03-17 00:45:05.358448 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:45:05.358453 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.130) 0:00:22.913 ********* 2026-03-17 00:45:05.358457 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358461 | orchestrator | 2026-03-17 00:45:05.358465 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-17 00:45:05.358469 | orchestrator | Tuesday 17 March 2026 00:45:02 +0000 (0:00:00.098) 0:00:23.011 ********* 2026-03-17 00:45:05.358474 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:05.358478 | orchestrator | 2026-03-17 00:45:05.358482 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-17 00:45:05.358489 | orchestrator | Tuesday 17 March 2026 00:45:03 +0000 
(0:00:00.099) 0:00:23.111 ********* 2026-03-17 00:45:05.358494 | orchestrator | changed: [testbed-node-4] => { 2026-03-17 00:45:05.358498 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-17 00:45:05.358502 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:45:05.358507 | orchestrator |  "sdb": { 2026-03-17 00:45:05.358511 | orchestrator |  "osd_lvm_uuid": "dc88f193-a403-571c-9716-867079cb0a77" 2026-03-17 00:45:05.358516 | orchestrator |  }, 2026-03-17 00:45:05.358520 | orchestrator |  "sdc": { 2026-03-17 00:45:05.358524 | orchestrator |  "osd_lvm_uuid": "9e905ad0-9805-5328-aec5-92944dddbd57" 2026-03-17 00:45:05.358529 | orchestrator |  } 2026-03-17 00:45:05.358533 | orchestrator |  }, 2026-03-17 00:45:05.358537 | orchestrator |  "lvm_volumes": [ 2026-03-17 00:45:05.358541 | orchestrator |  { 2026-03-17 00:45:05.358546 | orchestrator |  "data": "osd-block-dc88f193-a403-571c-9716-867079cb0a77", 2026-03-17 00:45:05.358550 | orchestrator |  "data_vg": "ceph-dc88f193-a403-571c-9716-867079cb0a77" 2026-03-17 00:45:05.358554 | orchestrator |  }, 2026-03-17 00:45:05.358558 | orchestrator |  { 2026-03-17 00:45:05.358563 | orchestrator |  "data": "osd-block-9e905ad0-9805-5328-aec5-92944dddbd57", 2026-03-17 00:45:05.358567 | orchestrator |  "data_vg": "ceph-9e905ad0-9805-5328-aec5-92944dddbd57" 2026-03-17 00:45:05.358571 | orchestrator |  } 2026-03-17 00:45:05.358575 | orchestrator |  ] 2026-03-17 00:45:05.358580 | orchestrator |  } 2026-03-17 00:45:05.358584 | orchestrator | } 2026-03-17 00:45:05.358588 | orchestrator | 2026-03-17 00:45:05.358593 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-17 00:45:05.358597 | orchestrator | Tuesday 17 March 2026 00:45:03 +0000 (0:00:00.436) 0:00:23.547 ********* 2026-03-17 00:45:05.358601 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:45:05.358605 | orchestrator | 2026-03-17 00:45:05.358614 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-17 00:45:05.358618 | orchestrator | 2026-03-17 00:45:05.358622 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:45:05.358627 | orchestrator | Tuesday 17 March 2026 00:45:04 +0000 (0:00:00.801) 0:00:24.349 ********* 2026-03-17 00:45:05.358631 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-17 00:45:05.358635 | orchestrator | 2026-03-17 00:45:05.358639 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:45:05.358643 | orchestrator | Tuesday 17 March 2026 00:45:04 +0000 (0:00:00.327) 0:00:24.677 ********* 2026-03-17 00:45:05.358648 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:05.358652 | orchestrator | 2026-03-17 00:45:05.358656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:05.358660 | orchestrator | Tuesday 17 March 2026 00:45:05 +0000 (0:00:00.516) 0:00:25.194 ********* 2026-03-17 00:45:05.358664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:45:05.358668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:45:05.358673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:45:05.358677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:45:05.358681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:45:05.358688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:45:12.780439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:45:12.780550 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:45:12.780567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-17 00:45:12.780587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:45:12.780606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:45:12.780624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:45:12.780636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:45:12.780647 | orchestrator | 2026-03-17 00:45:12.780659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.780671 | orchestrator | Tuesday 17 March 2026 00:45:05 +0000 (0:00:00.322) 0:00:25.516 ********* 2026-03-17 00:45:12.780682 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.780714 | orchestrator | 2026-03-17 00:45:12.780725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.780736 | orchestrator | Tuesday 17 March 2026 00:45:05 +0000 (0:00:00.164) 0:00:25.681 ********* 2026-03-17 00:45:12.780747 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.780758 | orchestrator | 2026-03-17 00:45:12.780768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.780777 | orchestrator | Tuesday 17 March 2026 00:45:05 +0000 (0:00:00.180) 0:00:25.861 ********* 2026-03-17 00:45:12.780787 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.780796 | orchestrator | 2026-03-17 00:45:12.780821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.780838 | 
orchestrator | Tuesday 17 March 2026 00:45:05 +0000 (0:00:00.175) 0:00:26.037 ********* 2026-03-17 00:45:12.780855 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.780872 | orchestrator | 2026-03-17 00:45:12.780888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.780904 | orchestrator | Tuesday 17 March 2026 00:45:06 +0000 (0:00:00.178) 0:00:26.216 ********* 2026-03-17 00:45:12.780952 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.780991 | orchestrator | 2026-03-17 00:45:12.781002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781012 | orchestrator | Tuesday 17 March 2026 00:45:06 +0000 (0:00:00.184) 0:00:26.401 ********* 2026-03-17 00:45:12.781021 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781031 | orchestrator | 2026-03-17 00:45:12.781041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781051 | orchestrator | Tuesday 17 March 2026 00:45:06 +0000 (0:00:00.167) 0:00:26.569 ********* 2026-03-17 00:45:12.781120 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781131 | orchestrator | 2026-03-17 00:45:12.781141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781151 | orchestrator | Tuesday 17 March 2026 00:45:06 +0000 (0:00:00.190) 0:00:26.759 ********* 2026-03-17 00:45:12.781161 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781170 | orchestrator | 2026-03-17 00:45:12.781180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781189 | orchestrator | Tuesday 17 March 2026 00:45:06 +0000 (0:00:00.175) 0:00:26.934 ********* 2026-03-17 00:45:12.781199 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d) 2026-03-17 00:45:12.781210 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d) 2026-03-17 00:45:12.781220 | orchestrator | 2026-03-17 00:45:12.781229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781239 | orchestrator | Tuesday 17 March 2026 00:45:07 +0000 (0:00:00.541) 0:00:27.476 ********* 2026-03-17 00:45:12.781266 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee) 2026-03-17 00:45:12.781277 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee) 2026-03-17 00:45:12.781287 | orchestrator | 2026-03-17 00:45:12.781296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781306 | orchestrator | Tuesday 17 March 2026 00:45:08 +0000 (0:00:00.629) 0:00:28.105 ********* 2026-03-17 00:45:12.781315 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9) 2026-03-17 00:45:12.781325 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9) 2026-03-17 00:45:12.781335 | orchestrator | 2026-03-17 00:45:12.781352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:45:12.781369 | orchestrator | Tuesday 17 March 2026 00:45:08 +0000 (0:00:00.363) 0:00:28.469 ********* 2026-03-17 00:45:12.781386 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65) 2026-03-17 00:45:12.781402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65) 2026-03-17 00:45:12.781418 | orchestrator | 2026-03-17 00:45:12.781435 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-17 00:45:12.781451 | orchestrator | Tuesday 17 March 2026 00:45:08 +0000 (0:00:00.397) 0:00:28.866 ********* 2026-03-17 00:45:12.781467 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:45:12.781484 | orchestrator | 2026-03-17 00:45:12.781501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781540 | orchestrator | Tuesday 17 March 2026 00:45:09 +0000 (0:00:00.306) 0:00:29.172 ********* 2026-03-17 00:45:12.781557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:45:12.781567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:45:12.781584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:45:12.781600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:45:12.781631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:45:12.781647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:45:12.781663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:45:12.781680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:45:12.781696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-17 00:45:12.781712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:45:12.781724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-17 00:45:12.781733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:45:12.781743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:45:12.781752 | orchestrator | 2026-03-17 00:45:12.781762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781772 | orchestrator | Tuesday 17 March 2026 00:45:09 +0000 (0:00:00.349) 0:00:29.522 ********* 2026-03-17 00:45:12.781781 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781790 | orchestrator | 2026-03-17 00:45:12.781800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781809 | orchestrator | Tuesday 17 March 2026 00:45:09 +0000 (0:00:00.191) 0:00:29.713 ********* 2026-03-17 00:45:12.781818 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781828 | orchestrator | 2026-03-17 00:45:12.781837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781846 | orchestrator | Tuesday 17 March 2026 00:45:09 +0000 (0:00:00.191) 0:00:29.905 ********* 2026-03-17 00:45:12.781856 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781865 | orchestrator | 2026-03-17 00:45:12.781875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781884 | orchestrator | Tuesday 17 March 2026 00:45:09 +0000 (0:00:00.180) 0:00:30.086 ********* 2026-03-17 00:45:12.781894 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.781912 | orchestrator | 2026-03-17 00:45:12.781928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.781945 | orchestrator | Tuesday 17 March 2026 00:45:10 +0000 (0:00:00.189) 0:00:30.276 ********* 2026-03-17 00:45:12.782082 
| orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782100 | orchestrator | 2026-03-17 00:45:12.782110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782124 | orchestrator | Tuesday 17 March 2026 00:45:10 +0000 (0:00:00.177) 0:00:30.454 ********* 2026-03-17 00:45:12.782140 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782156 | orchestrator | 2026-03-17 00:45:12.782173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782190 | orchestrator | Tuesday 17 March 2026 00:45:10 +0000 (0:00:00.532) 0:00:30.986 ********* 2026-03-17 00:45:12.782205 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782222 | orchestrator | 2026-03-17 00:45:12.782233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782242 | orchestrator | Tuesday 17 March 2026 00:45:11 +0000 (0:00:00.215) 0:00:31.201 ********* 2026-03-17 00:45:12.782252 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782261 | orchestrator | 2026-03-17 00:45:12.782270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782280 | orchestrator | Tuesday 17 March 2026 00:45:11 +0000 (0:00:00.197) 0:00:31.399 ********* 2026-03-17 00:45:12.782290 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-17 00:45:12.782309 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-17 00:45:12.782319 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-17 00:45:12.782329 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-17 00:45:12.782338 | orchestrator | 2026-03-17 00:45:12.782348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782357 | orchestrator | Tuesday 17 March 2026 00:45:11 +0000 (0:00:00.635) 0:00:32.034 
********* 2026-03-17 00:45:12.782367 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782376 | orchestrator | 2026-03-17 00:45:12.782386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782395 | orchestrator | Tuesday 17 March 2026 00:45:12 +0000 (0:00:00.216) 0:00:32.251 ********* 2026-03-17 00:45:12.782405 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782414 | orchestrator | 2026-03-17 00:45:12.782424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782433 | orchestrator | Tuesday 17 March 2026 00:45:12 +0000 (0:00:00.223) 0:00:32.474 ********* 2026-03-17 00:45:12.782442 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782452 | orchestrator | 2026-03-17 00:45:12.782461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:45:12.782471 | orchestrator | Tuesday 17 March 2026 00:45:12 +0000 (0:00:00.196) 0:00:32.671 ********* 2026-03-17 00:45:12.782481 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:12.782490 | orchestrator | 2026-03-17 00:45:12.782510 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:45:16.834499 | orchestrator | Tuesday 17 March 2026 00:45:12 +0000 (0:00:00.194) 0:00:32.865 ********* 2026-03-17 00:45:16.834601 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-17 00:45:16.834621 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:45:16.834628 | orchestrator | 2026-03-17 00:45:16.834636 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-17 00:45:16.834642 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:00.229) 0:00:33.095 ********* 2026-03-17 00:45:16.834649 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 00:45:16.834656 | orchestrator | 2026-03-17 00:45:16.834662 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:45:16.834668 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:00.131) 0:00:33.227 ********* 2026-03-17 00:45:16.834690 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.834697 | orchestrator | 2026-03-17 00:45:16.834703 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:45:16.834710 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:00.124) 0:00:33.352 ********* 2026-03-17 00:45:16.834716 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.834722 | orchestrator | 2026-03-17 00:45:16.834729 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-17 00:45:16.834736 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:00.142) 0:00:33.495 ********* 2026-03-17 00:45:16.834744 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:16.834755 | orchestrator | 2026-03-17 00:45:16.834767 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:45:16.834778 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:00.413) 0:00:33.908 ********* 2026-03-17 00:45:16.834789 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c41c00e-01b2-5de9-9d7e-31888b7f9771'}}) 2026-03-17 00:45:16.834805 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'}}) 2026-03-17 00:45:16.834815 | orchestrator | 2026-03-17 00:45:16.834826 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:45:16.834833 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.211) 0:00:34.120 ********* 2026-03-17 00:45:16.834840 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c41c00e-01b2-5de9-9d7e-31888b7f9771'}})  2026-03-17 00:45:16.834869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'}})  2026-03-17 00:45:16.834876 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.834882 | orchestrator | 2026-03-17 00:45:16.834888 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:45:16.834894 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.165) 0:00:34.285 ********* 2026-03-17 00:45:16.834900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c41c00e-01b2-5de9-9d7e-31888b7f9771'}})  2026-03-17 00:45:16.834909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'}})  2026-03-17 00:45:16.834919 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.834929 | orchestrator | 2026-03-17 00:45:16.834939 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:45:16.834988 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.212) 0:00:34.497 ********* 2026-03-17 00:45:16.835005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c41c00e-01b2-5de9-9d7e-31888b7f9771'}})  2026-03-17 00:45:16.835012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'}})  2026-03-17 00:45:16.835018 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835024 | orchestrator | 2026-03-17 00:45:16.835030 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:45:16.835036 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 
(0:00:00.224) 0:00:34.722 ********* 2026-03-17 00:45:16.835043 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:16.835049 | orchestrator | 2026-03-17 00:45:16.835055 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:45:16.835061 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.124) 0:00:34.847 ********* 2026-03-17 00:45:16.835067 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:16.835073 | orchestrator | 2026-03-17 00:45:16.835080 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:45:16.835086 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.108) 0:00:34.955 ********* 2026-03-17 00:45:16.835092 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835098 | orchestrator | 2026-03-17 00:45:16.835104 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:45:16.835111 | orchestrator | Tuesday 17 March 2026 00:45:14 +0000 (0:00:00.113) 0:00:35.068 ********* 2026-03-17 00:45:16.835117 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835123 | orchestrator | 2026-03-17 00:45:16.835129 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:45:16.835135 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.119) 0:00:35.188 ********* 2026-03-17 00:45:16.835141 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835147 | orchestrator | 2026-03-17 00:45:16.835153 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:45:16.835160 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.245) 0:00:35.434 ********* 2026-03-17 00:45:16.835166 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:45:16.835172 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:45:16.835179 | orchestrator |  "sdb": { 
2026-03-17 00:45:16.835202 | orchestrator |  "osd_lvm_uuid": "3c41c00e-01b2-5de9-9d7e-31888b7f9771" 2026-03-17 00:45:16.835209 | orchestrator |  }, 2026-03-17 00:45:16.835216 | orchestrator |  "sdc": { 2026-03-17 00:45:16.835222 | orchestrator |  "osd_lvm_uuid": "b1b21aa2-16de-5cd3-9497-37bc0f66c5a5" 2026-03-17 00:45:16.835228 | orchestrator |  } 2026-03-17 00:45:16.835235 | orchestrator |  } 2026-03-17 00:45:16.835241 | orchestrator | } 2026-03-17 00:45:16.835248 | orchestrator | 2026-03-17 00:45:16.835319 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:45:16.835326 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.109) 0:00:35.543 ********* 2026-03-17 00:45:16.835332 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835339 | orchestrator | 2026-03-17 00:45:16.835345 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:45:16.835351 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.083) 0:00:35.626 ********* 2026-03-17 00:45:16.835357 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835363 | orchestrator | 2026-03-17 00:45:16.835369 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-17 00:45:16.835375 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.232) 0:00:35.859 ********* 2026-03-17 00:45:16.835382 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:16.835388 | orchestrator | 2026-03-17 00:45:16.835394 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-17 00:45:16.835400 | orchestrator | Tuesday 17 March 2026 00:45:15 +0000 (0:00:00.095) 0:00:35.954 ********* 2026-03-17 00:45:16.835406 | orchestrator | changed: [testbed-node-5] => { 2026-03-17 00:45:16.835412 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-17 00:45:16.835419 | orchestrator | 
 "ceph_osd_devices": { 2026-03-17 00:45:16.835425 | orchestrator |  "sdb": { 2026-03-17 00:45:16.835431 | orchestrator |  "osd_lvm_uuid": "3c41c00e-01b2-5de9-9d7e-31888b7f9771" 2026-03-17 00:45:16.835438 | orchestrator |  }, 2026-03-17 00:45:16.835444 | orchestrator |  "sdc": { 2026-03-17 00:45:16.835450 | orchestrator |  "osd_lvm_uuid": "b1b21aa2-16de-5cd3-9497-37bc0f66c5a5" 2026-03-17 00:45:16.835456 | orchestrator |  } 2026-03-17 00:45:16.835463 | orchestrator |  }, 2026-03-17 00:45:16.835469 | orchestrator |  "lvm_volumes": [ 2026-03-17 00:45:16.835475 | orchestrator |  { 2026-03-17 00:45:16.835482 | orchestrator |  "data": "osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771", 2026-03-17 00:45:16.835488 | orchestrator |  "data_vg": "ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771" 2026-03-17 00:45:16.835494 | orchestrator |  }, 2026-03-17 00:45:16.835503 | orchestrator |  { 2026-03-17 00:45:16.835510 | orchestrator |  "data": "osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5", 2026-03-17 00:45:16.835516 | orchestrator |  "data_vg": "ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5" 2026-03-17 00:45:16.835522 | orchestrator |  } 2026-03-17 00:45:16.835528 | orchestrator |  ] 2026-03-17 00:45:16.835534 | orchestrator |  } 2026-03-17 00:45:16.835541 | orchestrator | } 2026-03-17 00:45:16.835547 | orchestrator | 2026-03-17 00:45:16.835553 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-17 00:45:16.835559 | orchestrator | Tuesday 17 March 2026 00:45:16 +0000 (0:00:00.174) 0:00:36.129 ********* 2026-03-17 00:45:16.835566 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-17 00:45:16.835572 | orchestrator | 2026-03-17 00:45:16.835578 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:45:16.835584 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 00:45:16.835592 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 00:45:16.835599 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 00:45:16.835605 | orchestrator | 2026-03-17 00:45:16.835611 | orchestrator | 2026-03-17 00:45:16.835617 | orchestrator | 2026-03-17 00:45:16.835623 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:45:16.835629 | orchestrator | Tuesday 17 March 2026 00:45:16 +0000 (0:00:00.772) 0:00:36.902 ********* 2026-03-17 00:45:16.835641 | orchestrator | =============================================================================== 2026-03-17 00:45:16.835648 | orchestrator | Write configuration file ------------------------------------------------ 3.47s 2026-03-17 00:45:16.835654 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2026-03-17 00:45:16.835666 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-03-17 00:45:16.835672 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s 2026-03-17 00:45:16.835679 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-03-17 00:45:16.835685 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-03-17 00:45:16.835691 | orchestrator | Print configuration data ------------------------------------------------ 0.77s 2026-03-17 00:45:16.835697 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s 2026-03-17 00:45:16.835703 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.64s 2026-03-17 00:45:16.835709 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2026-03-17 
00:45:16.835716 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-03-17 00:45:16.835722 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s 2026-03-17 00:45:16.835728 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-17 00:45:16.835739 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-03-17 00:45:17.043620 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-03-17 00:45:17.043703 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-17 00:45:17.043712 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s 2026-03-17 00:45:17.043719 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-03-17 00:45:17.043727 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.51s 2026-03-17 00:45:17.043735 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s 2026-03-17 00:45:38.613495 | orchestrator | 2026-03-17 00:45:38 | INFO  | Task 42e8116e-c51a-4151-9467-bbd64c253cad (sync inventory) is running in background. Output coming soon. 
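Editor's note: the configuration dump above shows how each entry in `ceph_osd_devices` corresponds one-to-one to an entry in `lvm_volumes` (`osd-block-<uuid>` inside volume group `ceph-<uuid>`). A minimal sketch of that mapping, using the exact values from the log (this is illustrative, not the OSISM implementation):

```python
# Derive the lvm_volumes list from ceph_osd_devices, mirroring the
# naming convention visible in the printed configuration data above:
# LV "osd-block-<uuid>" inside VG "ceph-<uuid>".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "3c41c00e-01b2-5de9-9d7e-31888b7f9771"},
    "sdc": {"osd_lvm_uuid": "b1b21aa2-16de-5cd3-9497-37bc0f66c5a5"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{device['osd_lvm_uuid']}",
        "data_vg": f"ceph-{device['osd_lvm_uuid']}",
    }
    for device in ceph_osd_devices.values()
]

for volume in lvm_volumes:
    print(volume["data"], "->", volume["data_vg"])
```

The same two entries shown in the JSON dump fall out of this comprehension, one per OSD device.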
2026-03-17 00:46:06.335653 | orchestrator | 2026-03-17 00:45:39 | INFO  | Starting group_vars file reorganization
2026-03-17 00:46:06.335757 | orchestrator | 2026-03-17 00:45:39 | INFO  | Moved 0 file(s) to their respective directories
2026-03-17 00:46:06.335770 | orchestrator | 2026-03-17 00:45:39 | INFO  | Group_vars file reorganization completed
2026-03-17 00:46:06.335780 | orchestrator | 2026-03-17 00:45:42 | INFO  | Starting variable preparation from inventory
2026-03-17 00:46:06.335790 | orchestrator | 2026-03-17 00:45:45 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-17 00:46:06.335799 | orchestrator | 2026-03-17 00:45:45 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-17 00:46:06.335825 | orchestrator | 2026-03-17 00:45:45 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-17 00:46:06.335834 | orchestrator | 2026-03-17 00:45:45 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-17 00:46:06.335843 | orchestrator | 2026-03-17 00:45:45 | INFO  | Variable preparation completed
2026-03-17 00:46:06.335852 | orchestrator | 2026-03-17 00:45:46 | INFO  | Starting inventory overwrite handling
2026-03-17 00:46:06.335861 | orchestrator | 2026-03-17 00:45:46 | INFO  | Handling group overwrites in 99-overwrite
2026-03-17 00:46:06.335870 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removing group frr:children from 60-generic
2026-03-17 00:46:06.335899 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-17 00:46:06.335909 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-17 00:46:06.335918 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-17 00:46:06.335926 | orchestrator | 2026-03-17 00:45:46 | INFO  | Handling group overwrites in 20-roles
2026-03-17 00:46:06.335935 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-17 00:46:06.335944 | orchestrator | 2026-03-17 00:45:46 | INFO  | Removed 5 group(s) in total
2026-03-17 00:46:06.335953 | orchestrator | 2026-03-17 00:45:46 | INFO  | Inventory overwrite handling completed
2026-03-17 00:46:06.335961 | orchestrator | 2026-03-17 00:45:47 | INFO  | Starting merge of inventory files
2026-03-17 00:46:06.335970 | orchestrator | 2026-03-17 00:45:47 | INFO  | Inventory files merged successfully
2026-03-17 00:46:06.335979 | orchestrator | 2026-03-17 00:45:52 | INFO  | Generating minified hosts file
2026-03-17 00:46:06.335987 | orchestrator | 2026-03-17 00:45:54 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-17 00:46:06.335997 | orchestrator | 2026-03-17 00:45:54 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-17 00:46:06.336006 | orchestrator | 2026-03-17 00:45:55 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-17 00:46:06.336014 | orchestrator | 2026-03-17 00:46:05 | INFO  | Successfully wrote ClusterShell configuration
2026-03-17 00:46:06.336023 | orchestrator | [master 9325522] 2026-03-17-00-46
2026-03-17 00:46:06.336033 | orchestrator |  5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-17 00:46:06.336043 | orchestrator |  create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-17 00:46:06.336051 | orchestrator |  create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-17 00:46:06.336060 | orchestrator |  create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-17 00:46:07.535776 | orchestrator | 2026-03-17 00:46:07 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-17 00:46:07.591690 | orchestrator | 2026-03-17 00:46:07 | INFO  | Task 08305b1f-5a90-46e4-8dc0-6068c5c82228 (ceph-create-lvm-devices) was prepared for execution.
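Editor's note: the `osd_lvm_uuid` values logged for each node (e.g. `3c41c00e-01b2-5de9-…`, `45fdc78c-b598-5156-…`) all have `5` as the version nibble, i.e. they are name-based version-5 UUIDs, which makes the "Set UUIDs for OSD VGs/LVs" task deterministic across reruns. A hedged sketch of how such IDs can be derived; the namespace and seed string below are illustrative assumptions, not the actual OSISM inputs:

```python
import uuid

# Assumption for illustration: derive a stable version-5 UUID per
# host/device pair. The real playbook may seed this differently;
# only the "uuid5 => deterministic" property is taken from the log.
def osd_lvm_uuid(hostname: str, device: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-3", "sdb")
print(u)  # same inputs always yield the same UUID
```

Because the UUID is reproducible, rerunning the configuration play regenerates identical VG/LV names instead of inventing new ones.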
2026-03-17 00:46:07.591802 | orchestrator | 2026-03-17 00:46:07 | INFO  | It takes a moment until task 08305b1f-5a90-46e4-8dc0-6068c5c82228 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-17 00:46:18.282051 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-17 00:46:18.282133 | orchestrator | 2.16.14
2026-03-17 00:46:18.282141 | orchestrator | 
2026-03-17 00:46:18.282148 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-17 00:46:18.282154 | orchestrator | 
2026-03-17 00:46:18.282198 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:46:18.282204 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.210) 0:00:00.210 *********
2026-03-17 00:46:18.282210 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 00:46:18.282215 | orchestrator | 
2026-03-17 00:46:18.282219 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:46:18.282224 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.225) 0:00:00.435 *********
2026-03-17 00:46:18.282229 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:18.282234 | orchestrator | 
2026-03-17 00:46:18.282239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282244 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.189) 0:00:00.625 *********
2026-03-17 00:46:18.282264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-17 00:46:18.282269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-17 00:46:18.282273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-17 00:46:18.282280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-17 00:46:18.282288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-17 00:46:18.282295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-17 00:46:18.282302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-17 00:46:18.282310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-17 00:46:18.282318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-17 00:46:18.282327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-17 00:46:18.282335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-17 00:46:18.282343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-17 00:46:18.282351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-17 00:46:18.282358 | orchestrator | 
2026-03-17 00:46:18.282364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282368 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.353) 0:00:00.978 *********
2026-03-17 00:46:18.282373 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282378 | orchestrator | 
2026-03-17 00:46:18.282382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282387 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.345) 0:00:01.323 *********
2026-03-17 00:46:18.282392 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282396 | orchestrator | 
2026-03-17 00:46:18.282401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282405 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.163) 0:00:01.487 *********
2026-03-17 00:46:18.282422 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282427 | orchestrator | 
2026-03-17 00:46:18.282432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282436 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.163) 0:00:01.650 *********
2026-03-17 00:46:18.282441 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282445 | orchestrator | 
2026-03-17 00:46:18.282450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282454 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.161) 0:00:01.812 *********
2026-03-17 00:46:18.282459 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282463 | orchestrator | 
2026-03-17 00:46:18.282468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282472 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.162) 0:00:01.974 *********
2026-03-17 00:46:18.282477 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282481 | orchestrator | 
2026-03-17 00:46:18.282486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282491 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.207) 0:00:02.182 *********
2026-03-17 00:46:18.282495 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282500 | orchestrator | 
2026-03-17 00:46:18.282504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282509 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.170) 0:00:02.353 *********
2026-03-17 00:46:18.282513 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282568 | orchestrator | 
2026-03-17 00:46:18.282573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282578 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.209) 0:00:02.563 *********
2026-03-17 00:46:18.282583 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393)
2026-03-17 00:46:18.282589 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393)
2026-03-17 00:46:18.282595 | orchestrator | 
2026-03-17 00:46:18.282600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282618 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.385) 0:00:02.948 *********
2026-03-17 00:46:18.282624 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b)
2026-03-17 00:46:18.282629 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b)
2026-03-17 00:46:18.282635 | orchestrator | 
2026-03-17 00:46:18.282640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282645 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 (0:00:00.391) 0:00:03.340 *********
2026-03-17 00:46:18.282650 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451)
2026-03-17 00:46:18.282656 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451)
2026-03-17 00:46:18.282661 | orchestrator | 
2026-03-17 00:46:18.282666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282671 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 (0:00:00.701) 0:00:04.042 *********
2026-03-17 00:46:18.282676 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63)
2026-03-17 00:46:18.282680 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63)
2026-03-17 00:46:18.282685 | orchestrator | 
2026-03-17 00:46:18.282689 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:18.282694 | orchestrator | Tuesday 17 March 2026 00:46:15 +0000 (0:00:00.699) 0:00:04.742 *********
2026-03-17 00:46:18.282698 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-17 00:46:18.282703 | orchestrator | 
2026-03-17 00:46:18.282708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282715 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.795) 0:00:05.537 *********
2026-03-17 00:46:18.282720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-17 00:46:18.282725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-17 00:46:18.282729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-17 00:46:18.282734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-17 00:46:18.282738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-17 00:46:18.282743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-17 00:46:18.282747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-17 00:46:18.282752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-17 00:46:18.282756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-17 00:46:18.282761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-17 00:46:18.282765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-17 00:46:18.282770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-17 00:46:18.282779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-17 00:46:18.282783 | orchestrator | 
2026-03-17 00:46:18.282788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282792 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.423) 0:00:05.961 *********
2026-03-17 00:46:18.282797 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282801 | orchestrator | 
2026-03-17 00:46:18.282806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282811 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.223) 0:00:06.184 *********
2026-03-17 00:46:18.282815 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282820 | orchestrator | 
2026-03-17 00:46:18.282824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282829 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.196) 0:00:06.381 *********
2026-03-17 00:46:18.282834 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282838 | orchestrator | 
2026-03-17 00:46:18.282843 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282847 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.193) 0:00:06.575 *********
2026-03-17 00:46:18.282852 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282856 | orchestrator | 
2026-03-17 00:46:18.282861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282865 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.204) 0:00:06.779 *********
2026-03-17 00:46:18.282870 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282874 | orchestrator | 
2026-03-17 00:46:18.282879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282883 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.216) 0:00:06.995 *********
2026-03-17 00:46:18.282888 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282892 | orchestrator | 
2026-03-17 00:46:18.282897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:18.282902 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.229) 0:00:07.225 *********
2026-03-17 00:46:18.282906 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:18.282911 | orchestrator | 
2026-03-17 00:46:18.282918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.418844 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.230) 0:00:07.455 *********
2026-03-17 00:46:26.418928 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.418937 | orchestrator | 
2026-03-17 00:46:26.418943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.418949 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.209) 0:00:07.665 *********
2026-03-17 00:46:26.418954 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-17 00:46:26.418960 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-17 00:46:26.418966 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-17 00:46:26.418972 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-17 00:46:26.418977 | orchestrator | 
2026-03-17 00:46:26.418982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.418987 | orchestrator | Tuesday 17 March 2026 00:46:19 +0000 (0:00:01.076) 0:00:08.741 *********
2026-03-17 00:46:26.418992 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.418997 | orchestrator | 
2026-03-17 00:46:26.419003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.419008 | orchestrator | Tuesday 17 March 2026 00:46:19 +0000 (0:00:00.202) 0:00:08.944 *********
2026-03-17 00:46:26.419013 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419018 | orchestrator | 
2026-03-17 00:46:26.419023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.419044 | orchestrator | Tuesday 17 March 2026 00:46:19 +0000 (0:00:00.202) 0:00:09.146 *********
2026-03-17 00:46:26.419049 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419055 | orchestrator | 
2026-03-17 00:46:26.419060 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:46:26.419065 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.184) 0:00:09.331 *********
2026-03-17 00:46:26.419070 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419075 | orchestrator | 
2026-03-17 00:46:26.419080 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-17 00:46:26.419085 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.191) 0:00:09.523 *********
2026-03-17 00:46:26.419090 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419095 | orchestrator | 
2026-03-17 00:46:26.419101 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-17 00:46:26.419106 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.130) 0:00:09.653 *********
2026-03-17 00:46:26.419111 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45fdc78c-b598-5156-b36d-ba4cd7c12386'}})
2026-03-17 00:46:26.419117 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b5d6da3-626f-5c09-a421-20ac1510e3d2'}})
2026-03-17 00:46:26.419122 | orchestrator | 
2026-03-17 00:46:26.419127 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-17 00:46:26.419132 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.183) 0:00:09.837 *********
2026-03-17 00:46:26.419138 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419143 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419148 | orchestrator | 
2026-03-17 00:46:26.419154 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-17 00:46:26.419159 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:02.062) 0:00:11.900 *********
2026-03-17 00:46:26.419164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419233 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419239 | orchestrator | 
2026-03-17 00:46:26.419244 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-17 00:46:26.419249 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.146) 0:00:12.046 *********
2026-03-17 00:46:26.419254 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419259 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419264 | orchestrator | 
2026-03-17 00:46:26.419270 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-17 00:46:26.419275 | orchestrator | Tuesday 17 March 2026 00:46:24 +0000 (0:00:01.466) 0:00:13.512 *********
2026-03-17 00:46:26.419280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419290 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419295 | orchestrator | 
2026-03-17 00:46:26.419300 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-17 00:46:26.419312 | orchestrator | Tuesday 17 March 2026 00:46:24 +0000 (0:00:00.152) 0:00:13.664 *********
2026-03-17 00:46:26.419329 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419335 | orchestrator | 
2026-03-17 00:46:26.419340 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-17 00:46:26.419345 | orchestrator | Tuesday 17 March 2026 00:46:24 +0000 (0:00:00.155) 0:00:13.819 *********
2026-03-17 00:46:26.419350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419360 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419365 | orchestrator | 
2026-03-17 00:46:26.419370 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-17 00:46:26.419376 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.438) 0:00:14.258 *********
2026-03-17 00:46:26.419381 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419386 | orchestrator | 
2026-03-17 00:46:26.419391 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-17 00:46:26.419396 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.136) 0:00:14.395 *********
2026-03-17 00:46:26.419401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419413 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419419 | orchestrator | 
2026-03-17 00:46:26.419428 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-17 00:46:26.419434 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.155) 0:00:14.551 *********
2026-03-17 00:46:26.419439 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419445 | orchestrator | 
2026-03-17 00:46:26.419450 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-17 00:46:26.419456 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.133) 0:00:14.684 *********
2026-03-17 00:46:26.419462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419474 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419479 | orchestrator | 
2026-03-17 00:46:26.419485 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-17 00:46:26.419491 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.146) 0:00:14.831 *********
2026-03-17 00:46:26.419497 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:26.419503 | orchestrator | 
2026-03-17 00:46:26.419509 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-17 00:46:26.419515 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.132) 0:00:14.963 *********
2026-03-17 00:46:26.419521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419533 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419538 | orchestrator | 
2026-03-17 00:46:26.419544 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-17 00:46:26.419553 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.150) 0:00:15.114 *********
2026-03-17 00:46:26.419559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419571 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419577 | orchestrator | 
2026-03-17 00:46:26.419582 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-17 00:46:26.419588 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.155) 0:00:15.270 *********
2026-03-17 00:46:26.419594 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:46:26.419600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:46:26.419606 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419611 | orchestrator | 
2026-03-17 00:46:26.419617 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-17 00:46:26.419623 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.171) 0:00:15.442 *********
2026-03-17 00:46:26.419628 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:26.419634 | orchestrator | 
2026-03-17 00:46:26.419640 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-17 00:46:26.419649 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.153) 0:00:15.595 *********
2026-03-17 00:46:32.868319 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868413 | orchestrator | 
2026-03-17 00:46:32.868423 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-17 00:46:32.868431 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.164) 0:00:15.759 *********
2026-03-17 00:46:32.868438 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868445 | orchestrator | 
2026-03-17 00:46:32.868452 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-17 00:46:32.868459 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.150) 0:00:15.910 *********
2026-03-17 00:46:32.868465 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:46:32.868474 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-17 00:46:32.868481 | orchestrator | }
2026-03-17 00:46:32.868488 | orchestrator | 
2026-03-17 00:46:32.868495 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-17 00:46:32.868502 | orchestrator | Tuesday 17 March 2026 00:46:27 +0000 (0:00:00.348) 0:00:16.259 *********
2026-03-17 00:46:32.868508 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:46:32.868515 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-17 00:46:32.868521 | orchestrator | }
2026-03-17 00:46:32.868528 | orchestrator | 
2026-03-17 00:46:32.868534 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-17 00:46:32.868541 | orchestrator | Tuesday 17 March 2026 00:46:27 +0000 (0:00:00.147) 0:00:16.406 *********
2026-03-17 00:46:32.868547 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:46:32.868554 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-17 00:46:32.868560 | orchestrator | }
2026-03-17 00:46:32.868567 | orchestrator | 
2026-03-17 00:46:32.868573 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-17 00:46:32.868580 | orchestrator | Tuesday 17 March 2026 00:46:27 +0000 (0:00:00.150) 0:00:16.556 *********
2026-03-17 00:46:32.868586 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:32.868593 | orchestrator | 
2026-03-17 00:46:32.868599 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-17 00:46:32.868606 | orchestrator | Tuesday 17 March 2026 00:46:28 +0000 (0:00:00.666) 0:00:17.223 *********
2026-03-17 00:46:32.868630 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:32.868637 | orchestrator | 
2026-03-17 00:46:32.868643 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-17 00:46:32.868650 | orchestrator | Tuesday 17 March 2026 00:46:28 +0000 (0:00:00.534) 0:00:17.758 *********
2026-03-17 00:46:32.868656 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:32.868663 | orchestrator | 
2026-03-17 00:46:32.868669 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-17 00:46:32.868676 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.579) 0:00:18.338 *********
2026-03-17 00:46:32.868682 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:32.868689 | orchestrator | 
2026-03-17 00:46:32.868695 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-17 00:46:32.868701 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.157) 0:00:18.495 *********
2026-03-17 00:46:32.868708 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868714 | orchestrator | 
2026-03-17 00:46:32.868720 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-17 00:46:32.868727 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.108) 0:00:18.604 *********
2026-03-17 00:46:32.868733 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868739 | orchestrator | 
2026-03-17 00:46:32.868746 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-17 00:46:32.868752 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.115) 0:00:18.720 *********
2026-03-17 00:46:32.868758 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:46:32.868765 | orchestrator |     "vgs_report": {
2026-03-17 00:46:32.868772 | orchestrator |         "vg": []
2026-03-17 00:46:32.868778 | orchestrator |     }
2026-03-17 00:46:32.868785 | orchestrator | }
2026-03-17 00:46:32.868791 | orchestrator | 
2026-03-17 00:46:32.868798 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-17 00:46:32.868804 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.143) 0:00:18.863 *********
2026-03-17 00:46:32.868810 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868817 | orchestrator | 
2026-03-17 00:46:32.868823 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-17 00:46:32.868830 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.130) 0:00:18.993 *********
2026-03-17 00:46:32.868836 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868843 | orchestrator | 
2026-03-17 00:46:32.868853 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-17 00:46:32.868863 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.120) 0:00:19.114 *********
2026-03-17 00:46:32.868874 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:46:32.868884 | orchestrator | 
2026-03-17 00:46:32.868895 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-17 00:46:32.868906 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.312) 0:00:19.426 *********
2026-03-17 00:46:32.868917 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.868928 | orchestrator | 2026-03-17 00:46:32.868939 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:46:32.868948 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.128) 0:00:19.554 ********* 2026-03-17 00:46:32.868959 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.868969 | orchestrator | 2026-03-17 00:46:32.868979 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:46:32.868990 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.132) 0:00:19.687 ********* 2026-03-17 00:46:32.868999 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869010 | orchestrator | 2026-03-17 00:46:32.869019 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-17 00:46:32.869030 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.137) 0:00:19.824 ********* 2026-03-17 00:46:32.869041 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869057 | orchestrator | 2026-03-17 00:46:32.869067 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:46:32.869078 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.130) 0:00:19.954 ********* 2026-03-17 00:46:32.869102 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869112 | orchestrator | 2026-03-17 00:46:32.869139 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:46:32.869146 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.141) 0:00:20.096 ********* 2026-03-17 00:46:32.869152 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869159 | orchestrator | 2026-03-17 00:46:32.869165 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-17 00:46:32.869171 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.129) 0:00:20.226 ********* 2026-03-17 00:46:32.869178 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869185 | orchestrator | 2026-03-17 00:46:32.869191 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:46:32.869198 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.142) 0:00:20.368 ********* 2026-03-17 00:46:32.869219 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869226 | orchestrator | 2026-03-17 00:46:32.869232 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:46:32.869239 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.130) 0:00:20.499 ********* 2026-03-17 00:46:32.869245 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869252 | orchestrator | 2026-03-17 00:46:32.869258 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:46:32.869265 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.126) 0:00:20.625 ********* 2026-03-17 00:46:32.869271 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869277 | orchestrator | 2026-03-17 00:46:32.869283 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:46:32.869290 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.133) 0:00:20.759 ********* 2026-03-17 00:46:32.869296 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869302 | orchestrator | 2026-03-17 00:46:32.869312 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:46:32.869319 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.124) 0:00:20.883 ********* 2026-03-17 00:46:32.869327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:32.869335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:32.869342 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869348 | orchestrator | 2026-03-17 00:46:32.869354 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:46:32.869361 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.192) 0:00:21.076 ********* 2026-03-17 00:46:32.869367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:32.869373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:32.869380 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869386 | orchestrator | 2026-03-17 00:46:32.869393 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:46:32.869399 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.392) 0:00:21.468 ********* 2026-03-17 00:46:32.869406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:32.869412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:32.869424 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869430 | orchestrator | 2026-03-17 00:46:32.869436 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-17 00:46:32.869443 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.194) 0:00:21.663 ********* 2026-03-17 00:46:32.869449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:32.869456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:32.869462 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869468 | orchestrator | 2026-03-17 00:46:32.869475 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:46:32.869481 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.161) 0:00:21.824 ********* 2026-03-17 00:46:32.869487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:32.869494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:32.869500 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:32.869507 | orchestrator | 2026-03-17 00:46:32.869513 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:46:32.869520 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.156) 0:00:21.981 ********* 2026-03-17 00:46:32.869530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.642564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.642679 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.642692 | orchestrator | 2026-03-17 00:46:38.642701 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:46:38.642710 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.152) 0:00:22.133 ********* 2026-03-17 00:46:38.642767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.642777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.642784 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.642791 | orchestrator | 2026-03-17 00:46:38.642798 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:46:38.642805 | orchestrator | Tuesday 17 March 2026 00:46:33 +0000 (0:00:00.141) 0:00:22.275 ********* 2026-03-17 00:46:38.642812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.642834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.642841 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.642848 | orchestrator | 2026-03-17 00:46:38.642855 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:46:38.642861 | orchestrator | Tuesday 17 March 2026 00:46:33 +0000 (0:00:00.165) 0:00:22.440 ********* 2026-03-17 00:46:38.642868 | 
orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:38.642875 | orchestrator | 2026-03-17 00:46:38.642901 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:46:38.642908 | orchestrator | Tuesday 17 March 2026 00:46:33 +0000 (0:00:00.550) 0:00:22.991 ********* 2026-03-17 00:46:38.642915 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:38.642921 | orchestrator | 2026-03-17 00:46:38.642928 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:46:38.642934 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.541) 0:00:23.533 ********* 2026-03-17 00:46:38.642941 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:38.642948 | orchestrator | 2026-03-17 00:46:38.642954 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:46:38.642961 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.192) 0:00:23.725 ********* 2026-03-17 00:46:38.642968 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'vg_name': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'}) 2026-03-17 00:46:38.642976 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'vg_name': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'}) 2026-03-17 00:46:38.642982 | orchestrator | 2026-03-17 00:46:38.642990 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:46:38.642996 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.204) 0:00:23.930 ********* 2026-03-17 00:46:38.643003 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.643010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.643016 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.643023 | orchestrator | 2026-03-17 00:46:38.643030 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:46:38.643036 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.184) 0:00:24.114 ********* 2026-03-17 00:46:38.643043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.643050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.643056 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.643063 | orchestrator | 2026-03-17 00:46:38.643069 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:46:38.643076 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.395) 0:00:24.510 ********* 2026-03-17 00:46:38.643083 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})  2026-03-17 00:46:38.643089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})  2026-03-17 00:46:38.643170 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:38.643177 | orchestrator | 2026-03-17 00:46:38.643185 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-17 00:46:38.643193 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.185) 0:00:24.696 ********* 2026-03-17 00:46:38.643216 | 
orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:38.643347 | orchestrator |  "lvm_report": { 2026-03-17 00:46:38.643357 | orchestrator |  "lv": [ 2026-03-17 00:46:38.643365 | orchestrator |  { 2026-03-17 00:46:38.643373 | orchestrator |  "lv_name": "osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2", 2026-03-17 00:46:38.643381 | orchestrator |  "vg_name": "ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2" 2026-03-17 00:46:38.643389 | orchestrator |  }, 2026-03-17 00:46:38.643405 | orchestrator |  { 2026-03-17 00:46:38.643412 | orchestrator |  "lv_name": "osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386", 2026-03-17 00:46:38.643419 | orchestrator |  "vg_name": "ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386" 2026-03-17 00:46:38.643427 | orchestrator |  } 2026-03-17 00:46:38.643435 | orchestrator |  ], 2026-03-17 00:46:38.643442 | orchestrator |  "pv": [ 2026-03-17 00:46:38.643449 | orchestrator |  { 2026-03-17 00:46:38.643457 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-17 00:46:38.643464 | orchestrator |  "vg_name": "ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386" 2026-03-17 00:46:38.643471 | orchestrator |  }, 2026-03-17 00:46:38.643477 | orchestrator |  { 2026-03-17 00:46:38.643484 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-17 00:46:38.643490 | orchestrator |  "vg_name": "ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2" 2026-03-17 00:46:38.643497 | orchestrator |  } 2026-03-17 00:46:38.643503 | orchestrator |  ] 2026-03-17 00:46:38.643510 | orchestrator |  } 2026-03-17 00:46:38.643517 | orchestrator | } 2026-03-17 00:46:38.643523 | orchestrator | 2026-03-17 00:46:38.643530 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-17 00:46:38.643536 | orchestrator | 2026-03-17 00:46:38.643543 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:46:38.643550 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.295) 0:00:24.991 ********* 2026-03-17 
00:46:38.643557 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:46:38.643563 | orchestrator | 2026-03-17 00:46:38.643570 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:46:38.643577 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.294) 0:00:25.286 ********* 2026-03-17 00:46:38.643584 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:38.643590 | orchestrator | 2026-03-17 00:46:38.643597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.643603 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.278) 0:00:25.564 ********* 2026-03-17 00:46:38.643610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:46:38.643616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:46:38.643623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:46:38.643629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:46:38.643636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:46:38.643645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:46:38.643657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:46:38.643668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:46:38.643680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-17 00:46:38.643702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-17 
00:46:38.643714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:46:38.643759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:46:38.643771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:46:38.643847 | orchestrator | 2026-03-17 00:46:38.643859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.643870 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.435) 0:00:26.000 ********* 2026-03-17 00:46:38.643881 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.643902 | orchestrator | 2026-03-17 00:46:38.643914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.643925 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.207) 0:00:26.208 ********* 2026-03-17 00:46:38.643936 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.643947 | orchestrator | 2026-03-17 00:46:38.643959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.643970 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.260) 0:00:26.468 ********* 2026-03-17 00:46:38.643981 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.643992 | orchestrator | 2026-03-17 00:46:38.644002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.644014 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.223) 0:00:26.692 ********* 2026-03-17 00:46:38.644023 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.644030 | orchestrator | 2026-03-17 00:46:38.644037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.644044 | orchestrator 
| Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.694) 0:00:27.386 ********* 2026-03-17 00:46:38.644050 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.644057 | orchestrator | 2026-03-17 00:46:38.644063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:38.644070 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.206) 0:00:27.593 ********* 2026-03-17 00:46:38.644076 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:38.644083 | orchestrator | 2026-03-17 00:46:38.644099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013215 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.225) 0:00:27.819 ********* 2026-03-17 00:46:49.013341 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013350 | orchestrator | 2026-03-17 00:46:49.013356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013362 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.242) 0:00:28.061 ********* 2026-03-17 00:46:49.013377 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013415 | orchestrator | 2026-03-17 00:46:49.013422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013427 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.200) 0:00:28.262 ********* 2026-03-17 00:46:49.013433 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3) 2026-03-17 00:46:49.013483 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3) 2026-03-17 00:46:49.013489 | orchestrator | 2026-03-17 00:46:49.013494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013499 | orchestrator | Tuesday 
17 March 2026 00:46:39 +0000 (0:00:00.394) 0:00:28.656 ********* 2026-03-17 00:46:49.013504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15) 2026-03-17 00:46:49.013510 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15) 2026-03-17 00:46:49.013515 | orchestrator | 2026-03-17 00:46:49.013533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013538 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.421) 0:00:29.078 ********* 2026-03-17 00:46:49.013543 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c) 2026-03-17 00:46:49.013548 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c) 2026-03-17 00:46:49.013553 | orchestrator | 2026-03-17 00:46:49.013558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013562 | orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.407) 0:00:29.485 ********* 2026-03-17 00:46:49.013567 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8) 2026-03-17 00:46:49.013588 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8) 2026-03-17 00:46:49.013593 | orchestrator | 2026-03-17 00:46:49.013598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:49.013603 | orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.422) 0:00:29.907 ********* 2026-03-17 00:46:49.013608 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:46:49.013613 | orchestrator | 2026-03-17 00:46:49.013617 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2026-03-17 00:46:49.013622 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.326) 0:00:30.234 ********* 2026-03-17 00:46:49.013627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:46:49.013632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:46:49.013637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:46:49.013642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:46:49.013647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:46:49.013652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:46:49.013656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:46:49.013661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:46:49.013666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-17 00:46:49.013671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:46:49.013676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:46:49.013680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:46:49.013685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:46:49.013690 | orchestrator | 2026-03-17 00:46:49.013695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2026-03-17 00:46:49.013699 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.605) 0:00:30.839 ********* 2026-03-17 00:46:49.013704 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013709 | orchestrator | 2026-03-17 00:46:49.013714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013719 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.201) 0:00:31.041 ********* 2026-03-17 00:46:49.013723 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013728 | orchestrator | 2026-03-17 00:46:49.013733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013738 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.212) 0:00:31.253 ********* 2026-03-17 00:46:49.013743 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013748 | orchestrator | 2026-03-17 00:46:49.013766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013772 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.211) 0:00:31.465 ********* 2026-03-17 00:46:49.013778 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013784 | orchestrator | 2026-03-17 00:46:49.013789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013795 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.187) 0:00:31.653 ********* 2026-03-17 00:46:49.013801 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013819 | orchestrator | 2026-03-17 00:46:49.013836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013850 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.207) 0:00:31.860 ********* 2026-03-17 00:46:49.013858 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013866 
| orchestrator | 2026-03-17 00:46:49.013873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013882 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.211) 0:00:32.071 ********* 2026-03-17 00:46:49.013900 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013906 | orchestrator | 2026-03-17 00:46:49.013910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013915 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.220) 0:00:32.292 ********* 2026-03-17 00:46:49.013920 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013925 | orchestrator | 2026-03-17 00:46:49.013929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013938 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.218) 0:00:32.511 ********* 2026-03-17 00:46:49.013943 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-17 00:46:49.013949 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-17 00:46:49.013954 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-17 00:46:49.013959 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-17 00:46:49.013964 | orchestrator | 2026-03-17 00:46:49.013968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013973 | orchestrator | Tuesday 17 March 2026 00:46:44 +0000 (0:00:00.867) 0:00:33.379 ********* 2026-03-17 00:46:49.013978 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.013982 | orchestrator | 2026-03-17 00:46:49.013987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.013992 | orchestrator | Tuesday 17 March 2026 00:46:44 +0000 (0:00:00.227) 0:00:33.606 ********* 2026-03-17 00:46:49.013997 | orchestrator | skipping: 
[testbed-node-4] 2026-03-17 00:46:49.014001 | orchestrator | 2026-03-17 00:46:49.014006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.014011 | orchestrator | Tuesday 17 March 2026 00:46:44 +0000 (0:00:00.195) 0:00:33.801 ********* 2026-03-17 00:46:49.014049 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.014055 | orchestrator | 2026-03-17 00:46:49.014060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:49.014064 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.684) 0:00:34.486 ********* 2026-03-17 00:46:49.014069 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.014074 | orchestrator | 2026-03-17 00:46:49.014078 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-17 00:46:49.014083 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.209) 0:00:34.695 ********* 2026-03-17 00:46:49.014088 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.014093 | orchestrator | 2026-03-17 00:46:49.014098 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-17 00:46:49.014102 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.141) 0:00:34.836 ********* 2026-03-17 00:46:49.014107 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc88f193-a403-571c-9716-867079cb0a77'}}) 2026-03-17 00:46:49.014113 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e905ad0-9805-5328-aec5-92944dddbd57'}}) 2026-03-17 00:46:49.014118 | orchestrator | 2026-03-17 00:46:49.014122 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-17 00:46:49.014127 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.187) 0:00:35.023 ********* 2026-03-17 
00:46:49.014133 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'}) 2026-03-17 00:46:49.014139 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'}) 2026-03-17 00:46:49.014149 | orchestrator | 2026-03-17 00:46:49.014154 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-17 00:46:49.014159 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:01.802) 0:00:36.825 ********* 2026-03-17 00:46:49.014164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:49.014171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:49.014175 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:49.014180 | orchestrator | 2026-03-17 00:46:49.014185 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-17 00:46:49.014190 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.147) 0:00:36.973 ********* 2026-03-17 00:46:49.014195 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'}) 2026-03-17 00:46:49.014205 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'}) 2026-03-17 00:46:54.259896 | orchestrator | 2026-03-17 00:46:54.259963 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-17 
00:46:54.259972 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:01.303) 0:00:38.276 ********* 2026-03-17 00:46:54.259978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.259984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.259989 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.259994 | orchestrator | 2026-03-17 00:46:54.259999 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-17 00:46:54.260004 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.132) 0:00:38.409 ********* 2026-03-17 00:46:54.260008 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260013 | orchestrator | 2026-03-17 00:46:54.260018 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-17 00:46:54.260022 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.114) 0:00:38.524 ********* 2026-03-17 00:46:54.260027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260032 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260037 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260041 | orchestrator | 2026-03-17 00:46:54.260046 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-17 00:46:54.260051 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.135) 0:00:38.659 ********* 2026-03-17 00:46:54.260055 | 
orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260060 | orchestrator | 2026-03-17 00:46:54.260065 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-17 00:46:54.260069 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.125) 0:00:38.784 ********* 2026-03-17 00:46:54.260074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260097 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260102 | orchestrator | 2026-03-17 00:46:54.260107 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-17 00:46:54.260111 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.133) 0:00:38.918 ********* 2026-03-17 00:46:54.260116 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260121 | orchestrator | 2026-03-17 00:46:54.260135 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-17 00:46:54.260140 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.222) 0:00:39.141 ********* 2026-03-17 00:46:54.260145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260154 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260158 | orchestrator | 2026-03-17 00:46:54.260163 | orchestrator | TASK 
[Prepare variables for OSD count check] *********************************** 2026-03-17 00:46:54.260167 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.132) 0:00:39.273 ********* 2026-03-17 00:46:54.260172 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:54.260177 | orchestrator | 2026-03-17 00:46:54.260181 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-17 00:46:54.260186 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.124) 0:00:39.398 ********* 2026-03-17 00:46:54.260191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260200 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260204 | orchestrator | 2026-03-17 00:46:54.260209 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-17 00:46:54.260213 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.136) 0:00:39.535 ********* 2026-03-17 00:46:54.260218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260227 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260232 | orchestrator | 2026-03-17 00:46:54.260236 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-17 00:46:54.260250 | orchestrator | Tuesday 17 March 2026 
00:46:50 +0000 (0:00:00.133) 0:00:39.668 ********* 2026-03-17 00:46:54.260255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:54.260260 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:54.260265 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260269 | orchestrator | 2026-03-17 00:46:54.260305 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-17 00:46:54.260311 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.129) 0:00:39.798 ********* 2026-03-17 00:46:54.260318 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260326 | orchestrator | 2026-03-17 00:46:54.260335 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-17 00:46:54.260342 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.129) 0:00:39.927 ********* 2026-03-17 00:46:54.260365 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260373 | orchestrator | 2026-03-17 00:46:54.260381 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-17 00:46:54.260393 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.117) 0:00:40.045 ********* 2026-03-17 00:46:54.260401 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260406 | orchestrator | 2026-03-17 00:46:54.260410 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-17 00:46:54.260415 | orchestrator | Tuesday 17 March 2026 00:46:50 +0000 (0:00:00.118) 0:00:40.164 ********* 2026-03-17 00:46:54.260419 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:54.260424 | orchestrator |  
"_num_osds_wanted_per_db_vg": {} 2026-03-17 00:46:54.260429 | orchestrator | } 2026-03-17 00:46:54.260433 | orchestrator | 2026-03-17 00:46:54.260438 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-17 00:46:54.260442 | orchestrator | Tuesday 17 March 2026 00:46:51 +0000 (0:00:00.121) 0:00:40.285 ********* 2026-03-17 00:46:54.260447 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:54.260451 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-17 00:46:54.260456 | orchestrator | } 2026-03-17 00:46:54.260460 | orchestrator | 2026-03-17 00:46:54.260465 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-17 00:46:54.260470 | orchestrator | Tuesday 17 March 2026 00:46:51 +0000 (0:00:00.128) 0:00:40.413 ********* 2026-03-17 00:46:54.260474 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:54.260479 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-17 00:46:54.260484 | orchestrator | } 2026-03-17 00:46:54.260490 | orchestrator | 2026-03-17 00:46:54.260495 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-17 00:46:54.260500 | orchestrator | Tuesday 17 March 2026 00:46:51 +0000 (0:00:00.109) 0:00:40.522 ********* 2026-03-17 00:46:54.260505 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:54.260510 | orchestrator | 2026-03-17 00:46:54.260515 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-17 00:46:54.260521 | orchestrator | Tuesday 17 March 2026 00:46:52 +0000 (0:00:00.660) 0:00:41.183 ********* 2026-03-17 00:46:54.260526 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:54.260531 | orchestrator | 2026-03-17 00:46:54.260536 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-17 00:46:54.260541 | orchestrator | Tuesday 17 March 2026 00:46:52 +0000 
(0:00:00.599) 0:00:41.783 ********* 2026-03-17 00:46:54.260546 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:54.260552 | orchestrator | 2026-03-17 00:46:54.260557 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-17 00:46:54.260562 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.570) 0:00:42.353 ********* 2026-03-17 00:46:54.260567 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:54.260572 | orchestrator | 2026-03-17 00:46:54.260578 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-17 00:46:54.260583 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.151) 0:00:42.505 ********* 2026-03-17 00:46:54.260588 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260593 | orchestrator | 2026-03-17 00:46:54.260598 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-17 00:46:54.260603 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.115) 0:00:42.621 ********* 2026-03-17 00:46:54.260608 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260613 | orchestrator | 2026-03-17 00:46:54.260619 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-17 00:46:54.260624 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.111) 0:00:42.732 ********* 2026-03-17 00:46:54.260629 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:54.260635 | orchestrator |  "vgs_report": { 2026-03-17 00:46:54.260641 | orchestrator |  "vg": [] 2026-03-17 00:46:54.260646 | orchestrator |  } 2026-03-17 00:46:54.260651 | orchestrator | } 2026-03-17 00:46:54.260660 | orchestrator | 2026-03-17 00:46:54.260665 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-17 00:46:54.260670 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.141) 
0:00:42.874 ********* 2026-03-17 00:46:54.260675 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260681 | orchestrator | 2026-03-17 00:46:54.260686 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-17 00:46:54.260691 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.139) 0:00:43.014 ********* 2026-03-17 00:46:54.260696 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260701 | orchestrator | 2026-03-17 00:46:54.260706 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-17 00:46:54.260711 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.137) 0:00:43.151 ********* 2026-03-17 00:46:54.260716 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260722 | orchestrator | 2026-03-17 00:46:54.260727 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-17 00:46:54.260733 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.146) 0:00:43.298 ********* 2026-03-17 00:46:54.260738 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:54.260743 | orchestrator | 2026-03-17 00:46:54.260752 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:46:59.235284 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.136) 0:00:43.434 ********* 2026-03-17 00:46:59.235418 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235434 | orchestrator | 2026-03-17 00:46:59.235445 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:46:59.235456 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.136) 0:00:43.570 ********* 2026-03-17 00:46:59.235466 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235476 | orchestrator | 2026-03-17 00:46:59.235487 | orchestrator | TASK [Fail if size of WAL LVs on 
ceph_wal_devices > available] ***************** 2026-03-17 00:46:59.235498 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.353) 0:00:43.923 ********* 2026-03-17 00:46:59.235508 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235519 | orchestrator | 2026-03-17 00:46:59.235530 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:46:59.235541 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.135) 0:00:44.058 ********* 2026-03-17 00:46:59.235552 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235563 | orchestrator | 2026-03-17 00:46:59.235573 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:46:59.235584 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.125) 0:00:44.183 ********* 2026-03-17 00:46:59.235613 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235625 | orchestrator | 2026-03-17 00:46:59.235636 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-17 00:46:59.235647 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.150) 0:00:44.334 ********* 2026-03-17 00:46:59.235658 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235668 | orchestrator | 2026-03-17 00:46:59.235679 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:46:59.235690 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.135) 0:00:44.470 ********* 2026-03-17 00:46:59.235701 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235712 | orchestrator | 2026-03-17 00:46:59.235723 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:46:59.235734 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.124) 0:00:44.595 ********* 2026-03-17 00:46:59.235745 | orchestrator | 
skipping: [testbed-node-4] 2026-03-17 00:46:59.235756 | orchestrator | 2026-03-17 00:46:59.235767 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:46:59.235778 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.141) 0:00:44.737 ********* 2026-03-17 00:46:59.235788 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235823 | orchestrator | 2026-03-17 00:46:59.235836 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:46:59.235849 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.137) 0:00:44.874 ********* 2026-03-17 00:46:59.235861 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235874 | orchestrator | 2026-03-17 00:46:59.235887 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:46:59.235899 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.126) 0:00:45.001 ********* 2026-03-17 00:46:59.235913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.235927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.235940 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.235952 | orchestrator | 2026-03-17 00:46:59.235964 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:46:59.235975 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.162) 0:00:45.163 ********* 2026-03-17 00:46:59.235986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 
00:46:59.235997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236008 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236019 | orchestrator | 2026-03-17 00:46:59.236030 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:46:59.236040 | orchestrator | Tuesday 17 March 2026 00:46:56 +0000 (0:00:00.174) 0:00:45.337 ********* 2026-03-17 00:46:59.236051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236073 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236084 | orchestrator | 2026-03-17 00:46:59.236095 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-17 00:46:59.236106 | orchestrator | Tuesday 17 March 2026 00:46:56 +0000 (0:00:00.172) 0:00:45.510 ********* 2026-03-17 00:46:59.236116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236140 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236151 | orchestrator | 2026-03-17 00:46:59.236180 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:46:59.236192 | orchestrator | Tuesday 17 March 2026 
00:46:56 +0000 (0:00:00.579) 0:00:46.089 ********* 2026-03-17 00:46:59.236203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236214 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236225 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236236 | orchestrator | 2026-03-17 00:46:59.236247 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:46:59.236257 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.200) 0:00:46.290 ********* 2026-03-17 00:46:59.236276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236326 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236337 | orchestrator | 2026-03-17 00:46:59.236348 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:46:59.236359 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.163) 0:00:46.453 ********* 2026-03-17 00:46:59.236370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236392 | orchestrator | 
skipping: [testbed-node-4] 2026-03-17 00:46:59.236402 | orchestrator | 2026-03-17 00:46:59.236413 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:46:59.236424 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.156) 0:00:46.610 ********* 2026-03-17 00:46:59.236434 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236471 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236488 | orchestrator | 2026-03-17 00:46:59.236506 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:46:59.236525 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.155) 0:00:46.765 ********* 2026-03-17 00:46:59.236542 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:59.236560 | orchestrator | 2026-03-17 00:46:59.236578 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:46:59.236597 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.530) 0:00:47.296 ********* 2026-03-17 00:46:59.236616 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:59.236633 | orchestrator | 2026-03-17 00:46:59.236652 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:46:59.236664 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.545) 0:00:47.842 ********* 2026-03-17 00:46:59.236674 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:59.236685 | orchestrator | 2026-03-17 00:46:59.236695 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
2026-03-17 00:46:59.236706 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.161) 0:00:48.004 ********* 2026-03-17 00:46:59.236717 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'vg_name': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'}) 2026-03-17 00:46:59.236729 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'vg_name': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'}) 2026-03-17 00:46:59.236740 | orchestrator | 2026-03-17 00:46:59.236751 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:46:59.236761 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.162) 0:00:48.166 ********* 2026-03-17 00:46:59.236772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:46:59.236837 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:59.236857 | orchestrator | 2026-03-17 00:46:59.236868 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:46:59.236879 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.157) 0:00:48.324 ********* 2026-03-17 00:46:59.236889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})  2026-03-17 00:46:59.236910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})  2026-03-17 00:47:05.348472 | orchestrator | skipping: 
[testbed-node-4]
2026-03-17 00:47:05.348569 | orchestrator |
2026-03-17 00:47:05.348583 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-17 00:47:05.348593 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.174) 0:00:48.498 *********
2026-03-17 00:47:05.348603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})
2026-03-17 00:47:05.348614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})
2026-03-17 00:47:05.348623 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:47:05.348632 | orchestrator |
2026-03-17 00:47:05.348641 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-17 00:47:05.348649 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.171) 0:00:48.670 *********
2026-03-17 00:47:05.348658 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:47:05.348667 | orchestrator |     "lvm_report": {
2026-03-17 00:47:05.348677 | orchestrator |         "lv": [
2026-03-17 00:47:05.348701 | orchestrator |             {
2026-03-17 00:47:05.348710 | orchestrator |                 "lv_name": "osd-block-9e905ad0-9805-5328-aec5-92944dddbd57",
2026-03-17 00:47:05.348720 | orchestrator |                 "vg_name": "ceph-9e905ad0-9805-5328-aec5-92944dddbd57"
2026-03-17 00:47:05.348729 | orchestrator |             },
2026-03-17 00:47:05.348738 | orchestrator |             {
2026-03-17 00:47:05.348747 | orchestrator |                 "lv_name": "osd-block-dc88f193-a403-571c-9716-867079cb0a77",
2026-03-17 00:47:05.348755 | orchestrator |                 "vg_name": "ceph-dc88f193-a403-571c-9716-867079cb0a77"
2026-03-17 00:47:05.348764 | orchestrator |             }
2026-03-17 00:47:05.348773 | orchestrator |         ],
2026-03-17 00:47:05.348782 | orchestrator |         "pv": [
2026-03-17 00:47:05.348790 | orchestrator |             {
2026-03-17 00:47:05.348799 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-17 00:47:05.348808 | orchestrator |                 "vg_name": "ceph-dc88f193-a403-571c-9716-867079cb0a77"
2026-03-17 00:47:05.348817 | orchestrator |             },
2026-03-17 00:47:05.348825 | orchestrator |             {
2026-03-17 00:47:05.348834 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-17 00:47:05.348842 | orchestrator |                 "vg_name": "ceph-9e905ad0-9805-5328-aec5-92944dddbd57"
2026-03-17 00:47:05.348852 | orchestrator |             }
2026-03-17 00:47:05.348861 | orchestrator |         ]
2026-03-17 00:47:05.348870 | orchestrator |     }
2026-03-17 00:47:05.348879 | orchestrator | }
2026-03-17 00:47:05.348888 | orchestrator |
2026-03-17 00:47:05.348896 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-17 00:47:05.348905 | orchestrator |
2026-03-17 00:47:05.348914 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:47:05.348922 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.514) 0:00:49.184 *********
2026-03-17 00:47:05.348931 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-17 00:47:05.348940 | orchestrator |
2026-03-17 00:47:05.348949 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:47:05.348957 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.246) 0:00:49.431 *********
2026-03-17 00:47:05.348985 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:05.348994 | orchestrator |
2026-03-17 00:47:05.349003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349012 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.229) 0:00:49.661 *********
2026-03-17 00:47:05.349022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-17 00:47:05.349033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-17 00:47:05.349043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-17 00:47:05.349056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-17 00:47:05.349067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-17 00:47:05.349076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-17 00:47:05.349087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-17 00:47:05.349099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-17 00:47:05.349114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-17 00:47:05.349137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-17 00:47:05.349154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-17 00:47:05.349169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-17 00:47:05.349184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-17 00:47:05.349198 | orchestrator |
2026-03-17 00:47:05.349212 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349228 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.431) 0:00:50.092 *********
2026-03-17 00:47:05.349243 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349257 | orchestrator |
2026-03-17 00:47:05.349273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349288 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.206) 0:00:50.299 *********
2026-03-17 00:47:05.349304 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349344 | orchestrator |
2026-03-17 00:47:05.349355 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349382 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.213) 0:00:50.512 *********
2026-03-17 00:47:05.349392 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349401 | orchestrator |
2026-03-17 00:47:05.349409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349418 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.199) 0:00:50.711 *********
2026-03-17 00:47:05.349427 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349435 | orchestrator |
2026-03-17 00:47:05.349444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349453 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.195) 0:00:50.907 *********
2026-03-17 00:47:05.349461 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349470 | orchestrator |
2026-03-17 00:47:05.349478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349487 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.208) 0:00:51.116 *********
2026-03-17 00:47:05.349496 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349505 | orchestrator |
2026-03-17 00:47:05.349513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349529 | orchestrator | Tuesday 17 March 2026 00:47:02 +0000 (0:00:00.598) 0:00:51.715 *********
2026-03-17 00:47:05.349538 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349556 | orchestrator |
2026-03-17 00:47:05.349565 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349573 | orchestrator | Tuesday 17 March 2026 00:47:02 +0000 (0:00:00.203) 0:00:51.918 *********
2026-03-17 00:47:05.349582 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:05.349590 | orchestrator |
2026-03-17 00:47:05.349599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349608 | orchestrator | Tuesday 17 March 2026 00:47:02 +0000 (0:00:00.205) 0:00:52.123 *********
2026-03-17 00:47:05.349616 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d)
2026-03-17 00:47:05.349626 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d)
2026-03-17 00:47:05.349635 | orchestrator |
2026-03-17 00:47:05.349643 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349652 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.448) 0:00:52.571 *********
2026-03-17 00:47:05.349661 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee)
2026-03-17 00:47:05.349669 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee)
2026-03-17 00:47:05.349678 | orchestrator |
2026-03-17 00:47:05.349687 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349695 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.402) 0:00:52.974 *********
2026-03-17 00:47:05.349704 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9)
2026-03-17 00:47:05.349712 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9)
2026-03-17 00:47:05.349721 | orchestrator |
2026-03-17 00:47:05.349730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349738 | orchestrator | Tuesday 17 March 2026 00:47:04 +0000 (0:00:00.409) 0:00:53.384 *********
2026-03-17 00:47:05.349747 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65)
2026-03-17 00:47:05.349755 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65)
2026-03-17 00:47:05.349764 | orchestrator |
2026-03-17 00:47:05.349772 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:47:05.349781 | orchestrator | Tuesday 17 March 2026 00:47:04 +0000 (0:00:00.467) 0:00:53.852 *********
2026-03-17 00:47:05.349790 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-17 00:47:05.349799 | orchestrator |
2026-03-17 00:47:05.349807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:05.349816 | orchestrator | Tuesday 17 March 2026 00:47:05 +0000 (0:00:00.340) 0:00:54.192 *********
2026-03-17 00:47:05.349824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-17 00:47:05.349833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-17 00:47:05.349841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-17 00:47:05.349850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-17 00:47:05.349858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-17 00:47:05.349867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-17 00:47:05.349875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-17 00:47:05.349884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-17 00:47:05.349892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-17 00:47:05.349906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-17 00:47:05.349914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-17 00:47:05.349928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-17 00:47:14.610141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-17 00:47:14.610261 | orchestrator |
2026-03-17 00:47:14.610276 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.610287 | orchestrator | Tuesday 17 March 2026 00:47:05 +0000 (0:00:00.416) 0:00:54.609 *********
2026-03-17 00:47:14.610310 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611149 | orchestrator |
2026-03-17 00:47:14.611172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611186 | orchestrator | Tuesday 17 March 2026 00:47:05 +0000 (0:00:00.227) 0:00:54.837 *********
2026-03-17 00:47:14.611197 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611215 | orchestrator |
2026-03-17 00:47:14.611233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611251 | orchestrator | Tuesday 17 March 2026 00:47:05 +0000 (0:00:00.209) 0:00:55.046 *********
2026-03-17 00:47:14.611290 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611319 | orchestrator |
2026-03-17 00:47:14.611330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611394 | orchestrator | Tuesday 17 March 2026 00:47:06 +0000 (0:00:00.690) 0:00:55.737 *********
2026-03-17 00:47:14.611407 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611418 | orchestrator |
2026-03-17 00:47:14.611429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611440 | orchestrator | Tuesday 17 March 2026 00:47:06 +0000 (0:00:00.205) 0:00:55.943 *********
2026-03-17 00:47:14.611451 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611462 | orchestrator |
2026-03-17 00:47:14.611473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611484 | orchestrator | Tuesday 17 March 2026 00:47:06 +0000 (0:00:00.200) 0:00:56.143 *********
2026-03-17 00:47:14.611495 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611506 | orchestrator |
2026-03-17 00:47:14.611517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611527 | orchestrator | Tuesday 17 March 2026 00:47:07 +0000 (0:00:00.199) 0:00:56.343 *********
2026-03-17 00:47:14.611538 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611549 | orchestrator |
2026-03-17 00:47:14.611560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611570 | orchestrator | Tuesday 17 March 2026 00:47:07 +0000 (0:00:00.204) 0:00:56.547 *********
2026-03-17 00:47:14.611581 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611592 | orchestrator |
2026-03-17 00:47:14.611603 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611614 | orchestrator | Tuesday 17 March 2026 00:47:07 +0000 (0:00:00.193) 0:00:56.741 *********
2026-03-17 00:47:14.611625 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-17 00:47:14.611636 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-17 00:47:14.611648 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-17 00:47:14.611658 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-17 00:47:14.611669 | orchestrator |
2026-03-17 00:47:14.611680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611691 | orchestrator | Tuesday 17 March 2026 00:47:08 +0000 (0:00:00.680) 0:00:57.422 *********
2026-03-17 00:47:14.611702 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611712 | orchestrator |
2026-03-17 00:47:14.611723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611757 | orchestrator | Tuesday 17 March 2026 00:47:08 +0000 (0:00:00.214) 0:00:57.636 *********
2026-03-17 00:47:14.611768 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611779 | orchestrator |
2026-03-17 00:47:14.611789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611800 | orchestrator | Tuesday 17 March 2026 00:47:08 +0000 (0:00:00.241) 0:00:57.878 *********
2026-03-17 00:47:14.611811 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611822 | orchestrator |
2026-03-17 00:47:14.611832 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:47:14.611843 | orchestrator | Tuesday 17 March 2026 00:47:08 +0000 (0:00:00.257) 0:00:58.135 *********
2026-03-17 00:47:14.611853 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611864 | orchestrator |
2026-03-17 00:47:14.611875 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-17 00:47:14.611886 | orchestrator | Tuesday 17 March 2026 00:47:09 +0000 (0:00:00.236) 0:00:58.372 *********
2026-03-17 00:47:14.611896 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.611907 | orchestrator |
2026-03-17 00:47:14.611918 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-17 00:47:14.611928 | orchestrator | Tuesday 17 March 2026 00:47:09 +0000 (0:00:00.411) 0:00:58.783 *********
2026-03-17 00:47:14.611939 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c41c00e-01b2-5de9-9d7e-31888b7f9771'}})
2026-03-17 00:47:14.611950 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'}})
2026-03-17 00:47:14.611961 | orchestrator |
2026-03-17 00:47:14.611972 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-17 00:47:14.611983 | orchestrator | Tuesday 17 March 2026 00:47:09 +0000 (0:00:00.217) 0:00:59.001 *********
2026-03-17 00:47:14.611996 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612008 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612019 | orchestrator |
2026-03-17 00:47:14.612030 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-17 00:47:14.612061 | orchestrator | Tuesday 17 March 2026 00:47:11 +0000 (0:00:01.907) 0:01:00.909 *********
2026-03-17 00:47:14.612072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612095 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612106 | orchestrator |
2026-03-17 00:47:14.612117 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-17 00:47:14.612128 | orchestrator | Tuesday 17 March 2026 00:47:11 +0000 (0:00:00.162) 0:01:01.073 *********
2026-03-17 00:47:14.612139 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612150 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612160 | orchestrator |
2026-03-17 00:47:14.612171 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-17 00:47:14.612182 | orchestrator | Tuesday 17 March 2026 00:47:13 +0000 (0:00:01.453) 0:01:02.526 *********
2026-03-17 00:47:14.612193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612212 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612223 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612234 | orchestrator |
2026-03-17 00:47:14.612245 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-17 00:47:14.612256 | orchestrator | Tuesday 17 March 2026 00:47:13 +0000 (0:00:00.145) 0:01:02.671 *********
2026-03-17 00:47:14.612266 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612277 | orchestrator |
2026-03-17 00:47:14.612288 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-17 00:47:14.612298 | orchestrator | Tuesday 17 March 2026 00:47:13 +0000 (0:00:00.131) 0:01:02.803 *********
2026-03-17 00:47:14.612309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612331 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612361 | orchestrator |
2026-03-17 00:47:14.612372 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-17 00:47:14.612383 | orchestrator | Tuesday 17 March 2026 00:47:13 +0000 (0:00:00.154) 0:01:02.957 *********
2026-03-17 00:47:14.612394 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612404 | orchestrator |
2026-03-17 00:47:14.612415 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-17 00:47:14.612434 | orchestrator | Tuesday 17 March 2026 00:47:13 +0000 (0:00:00.143) 0:01:03.101 *********
2026-03-17 00:47:14.612446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612468 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612479 | orchestrator |
2026-03-17 00:47:14.612489 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-17 00:47:14.612500 | orchestrator | Tuesday 17 March 2026 00:47:14 +0000 (0:00:00.161) 0:01:03.262 *********
2026-03-17 00:47:14.612511 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612521 | orchestrator |
2026-03-17 00:47:14.612532 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-17 00:47:14.612543 | orchestrator | Tuesday 17 March 2026 00:47:14 +0000 (0:00:00.145) 0:01:03.407 *********
2026-03-17 00:47:14.612554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:14.612565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:14.612576 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:14.612586 | orchestrator |
2026-03-17 00:47:14.612597 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-17 00:47:14.612608 | orchestrator | Tuesday 17 March 2026 00:47:14 +0000 (0:00:00.135) 0:01:03.575 *********
2026-03-17 00:47:14.612619 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:14.612629 | orchestrator |
2026-03-17 00:47:14.612640 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-17 00:47:14.612654 | orchestrator | Tuesday 17 March 2026 00:47:14 +0000 (0:00:00.135) 0:01:03.710 *********
2026-03-17 00:47:14.612682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:21.194629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:21.194723 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194733 | orchestrator |
2026-03-17 00:47:21.194740 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-17 00:47:21.194748 | orchestrator | Tuesday 17 March 2026 00:47:14 +0000 (0:00:00.388) 0:01:04.099 *********
2026-03-17 00:47:21.194754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:21.194760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:21.194767 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194772 | orchestrator |
2026-03-17 00:47:21.194792 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-17 00:47:21.194798 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.204) 0:01:04.303 *********
2026-03-17 00:47:21.194804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:21.194810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:21.194817 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194823 | orchestrator |
2026-03-17 00:47:21.194829 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-17 00:47:21.194835 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.151) 0:01:04.455 *********
2026-03-17 00:47:21.194841 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194847 | orchestrator |
2026-03-17 00:47:21.194852 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-17 00:47:21.194859 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.139) 0:01:04.594 *********
2026-03-17 00:47:21.194865 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194871 | orchestrator |
2026-03-17 00:47:21.194877 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-17 00:47:21.194883 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.140) 0:01:04.734 *********
2026-03-17 00:47:21.194889 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.194896 | orchestrator |
2026-03-17 00:47:21.194902 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-17 00:47:21.194909 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.151) 0:01:04.886 *********
2026-03-17 00:47:21.194915 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:47:21.194922 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-17 00:47:21.194928 | orchestrator | }
2026-03-17 00:47:21.194935 | orchestrator |
2026-03-17 00:47:21.194941 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-17 00:47:21.194947 | orchestrator | Tuesday 17 March 2026 00:47:15 +0000 (0:00:00.150) 0:01:05.037 *********
2026-03-17 00:47:21.194953 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:47:21.194958 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-17 00:47:21.194965 | orchestrator | }
2026-03-17 00:47:21.194971 | orchestrator |
2026-03-17 00:47:21.194977 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-17 00:47:21.194983 | orchestrator | Tuesday 17 March 2026 00:47:16 +0000 (0:00:00.151) 0:01:05.188 *********
2026-03-17 00:47:21.194990 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:47:21.194996 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-17 00:47:21.195002 | orchestrator | }
2026-03-17 00:47:21.195008 | orchestrator |
2026-03-17 00:47:21.195014 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-17 00:47:21.195021 | orchestrator | Tuesday 17 March 2026 00:47:16 +0000 (0:00:00.189) 0:01:05.378 *********
2026-03-17 00:47:21.195045 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:21.195051 | orchestrator |
2026-03-17 00:47:21.195057 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-17 00:47:21.195063 | orchestrator | Tuesday 17 March 2026 00:47:16 +0000 (0:00:00.554) 0:01:05.932 *********
2026-03-17 00:47:21.195069 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:21.195074 | orchestrator |
2026-03-17 00:47:21.195080 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-17 00:47:21.195086 | orchestrator | Tuesday 17 March 2026 00:47:17 +0000 (0:00:00.620) 0:01:06.553 *********
2026-03-17 00:47:21.195092 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:21.195098 | orchestrator |
2026-03-17 00:47:21.195104 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-17 00:47:21.195109 | orchestrator | Tuesday 17 March 2026 00:47:17 +0000 (0:00:00.574) 0:01:07.128 *********
2026-03-17 00:47:21.195115 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:21.195121 | orchestrator |
2026-03-17 00:47:21.195127 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-17 00:47:21.195133 | orchestrator | Tuesday 17 March 2026 00:47:18 +0000 (0:00:00.430) 0:01:07.558 *********
2026-03-17 00:47:21.195139 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195145 | orchestrator |
2026-03-17 00:47:21.195151 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-17 00:47:21.195157 | orchestrator | Tuesday 17 March 2026 00:47:18 +0000 (0:00:00.127) 0:01:07.686 *********
2026-03-17 00:47:21.195163 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195169 | orchestrator |
2026-03-17 00:47:21.195174 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-17 00:47:21.195181 | orchestrator | Tuesday 17 March 2026 00:47:18 +0000 (0:00:00.109) 0:01:07.795 *********
2026-03-17 00:47:21.195188 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:47:21.195194 | orchestrator |     "vgs_report": {
2026-03-17 00:47:21.195201 | orchestrator |         "vg": []
2026-03-17 00:47:21.195223 | orchestrator |     }
2026-03-17 00:47:21.195230 | orchestrator | }
2026-03-17 00:47:21.195236 | orchestrator |
2026-03-17 00:47:21.195242 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-17 00:47:21.195248 | orchestrator | Tuesday 17 March 2026 00:47:18 +0000 (0:00:00.145) 0:01:07.941 *********
2026-03-17 00:47:21.195254 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195260 | orchestrator |
2026-03-17 00:47:21.195266 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-17 00:47:21.195272 | orchestrator | Tuesday 17 March 2026 00:47:18 +0000 (0:00:00.141) 0:01:08.074 *********
2026-03-17 00:47:21.195278 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195284 | orchestrator |
2026-03-17 00:47:21.195290 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-17 00:47:21.195296 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.144) 0:01:08.216 *********
2026-03-17 00:47:21.195301 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195307 | orchestrator |
2026-03-17 00:47:21.195313 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-17 00:47:21.195324 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.144) 0:01:08.360 *********
2026-03-17 00:47:21.195330 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195335 | orchestrator |
2026-03-17 00:47:21.195341 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-17 00:47:21.195346 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.115) 0:01:08.476 *********
2026-03-17 00:47:21.195351 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195356 | orchestrator |
2026-03-17 00:47:21.195384 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-17 00:47:21.195391 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.127) 0:01:08.603 *********
2026-03-17 00:47:21.195397 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195410 | orchestrator |
2026-03-17 00:47:21.195416 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-17 00:47:21.195422 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.125) 0:01:08.729 *********
2026-03-17 00:47:21.195428 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195434 | orchestrator |
2026-03-17 00:47:21.195440 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-17 00:47:21.195446 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.144) 0:01:08.873 *********
2026-03-17 00:47:21.195451 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195457 | orchestrator |
2026-03-17 00:47:21.195463 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-17 00:47:21.195469 | orchestrator | Tuesday 17 March 2026 00:47:19 +0000 (0:00:00.133) 0:01:09.007 *********
2026-03-17 00:47:21.195475 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195481 | orchestrator |
2026-03-17 00:47:21.195486 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-17 00:47:21.195493 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.359) 0:01:09.366 *********
2026-03-17 00:47:21.195499 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195505 | orchestrator |
2026-03-17 00:47:21.195511 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-17 00:47:21.195516 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.149) 0:01:09.515 *********
2026-03-17 00:47:21.195523 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195529 | orchestrator |
2026-03-17 00:47:21.195534 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-17 00:47:21.195540 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.145) 0:01:09.660 *********
2026-03-17 00:47:21.195546 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195552 | orchestrator |
2026-03-17 00:47:21.195558 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-17 00:47:21.195564 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.129) 0:01:09.789 *********
2026-03-17 00:47:21.195570 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195576 | orchestrator |
2026-03-17 00:47:21.195582 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-17 00:47:21.195588 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.122) 0:01:09.912 *********
2026-03-17 00:47:21.195594 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195600 | orchestrator |
2026-03-17 00:47:21.195606 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-17 00:47:21.195612 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.137) 0:01:10.049 *********
2026-03-17 00:47:21.195618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:21.195625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:21.195631 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195637 | orchestrator |
2026-03-17 00:47:21.195643 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-17 00:47:21.195649 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.135) 0:01:10.184 *********
2026-03-17 00:47:21.195655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:21.195661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:21.195667 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:21.195673 | orchestrator |
2026-03-17 00:47:21.195679 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-17 00:47:21.195691 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.131) 0:01:10.315 *********
2026-03-17 00:47:21.195704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106183 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106206 | orchestrator |
2026-03-17 00:47:24.106223 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-17 00:47:24.106233 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.137) 0:01:10.453 *********
2026-03-17 00:47:24.106240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106270 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106277 | orchestrator |
2026-03-17 00:47:24.106284 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-17 00:47:24.106292 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.138) 0:01:10.592 *********
2026-03-17 00:47:24.106299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106313 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106321 | orchestrator |
2026-03-17 00:47:24.106328 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-17 00:47:24.106335 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.150) 0:01:10.742 *********
2026-03-17
00:47:24.106343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})  2026-03-17 00:47:24.106350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})  2026-03-17 00:47:24.106357 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:24.106364 | orchestrator | 2026-03-17 00:47:24.106413 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:47:24.106421 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:00.159) 0:01:10.902 ********* 2026-03-17 00:47:24.106427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})  2026-03-17 00:47:24.106433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})  2026-03-17 00:47:24.106438 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:24.106444 | orchestrator | 2026-03-17 00:47:24.106451 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:47:24.106458 | orchestrator | Tuesday 17 March 2026 00:47:22 +0000 (0:00:00.301) 0:01:11.204 ********* 2026-03-17 00:47:24.106466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})  2026-03-17 00:47:24.106473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})  2026-03-17 00:47:24.106480 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:24.106506 | orchestrator | 
2026-03-17 00:47:24.106514 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-17 00:47:24.106521 | orchestrator | Tuesday 17 March 2026 00:47:22 +0000 (0:00:00.131) 0:01:11.335 *********
2026-03-17 00:47:24.106529 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:24.106537 | orchestrator | 
2026-03-17 00:47:24.106544 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-17 00:47:24.106551 | orchestrator | Tuesday 17 March 2026 00:47:22 +0000 (0:00:00.513) 0:01:11.849 *********
2026-03-17 00:47:24.106558 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:24.106565 | orchestrator | 
2026-03-17 00:47:24.106572 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-17 00:47:24.106579 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.575) 0:01:12.424 *********
2026-03-17 00:47:24.106586 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:24.106593 | orchestrator | 
2026-03-17 00:47:24.106601 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-17 00:47:24.106608 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.125) 0:01:12.550 *********
2026-03-17 00:47:24.106615 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'vg_name': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106623 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'vg_name': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106630 | orchestrator | 
2026-03-17 00:47:24.106637 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-17 00:47:24.106645 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.137) 0:01:12.688 *********
2026-03-17 00:47:24.106667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106682 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106689 | orchestrator | 
2026-03-17 00:47:24.106696 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-17 00:47:24.106702 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.140) 0:01:12.828 *********
2026-03-17 00:47:24.106710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106724 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106731 | orchestrator | 
2026-03-17 00:47:24.106739 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-17 00:47:24.106746 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.147) 0:01:12.975 *********
2026-03-17 00:47:24.106753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:47:24.106760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:47:24.106767 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:24.106774 | orchestrator | 
2026-03-17 00:47:24.106781 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-17 00:47:24.106788 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.150) 0:01:13.126 *********
2026-03-17 00:47:24.106795 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:47:24.106802 | orchestrator |     "lvm_report": {
2026-03-17 00:47:24.106810 | orchestrator |         "lv": [
2026-03-17 00:47:24.106823 | orchestrator |             {
2026-03-17 00:47:24.106830 | orchestrator |                 "lv_name": "osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771",
2026-03-17 00:47:24.106839 | orchestrator |                 "vg_name": "ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771"
2026-03-17 00:47:24.106846 | orchestrator |             },
2026-03-17 00:47:24.106853 | orchestrator |             {
2026-03-17 00:47:24.106860 | orchestrator |                 "lv_name": "osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5",
2026-03-17 00:47:24.106867 | orchestrator |                 "vg_name": "ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5"
2026-03-17 00:47:24.106874 | orchestrator |             }
2026-03-17 00:47:24.106881 | orchestrator |         ],
2026-03-17 00:47:24.106889 | orchestrator |         "pv": [
2026-03-17 00:47:24.106896 | orchestrator |             {
2026-03-17 00:47:24.106903 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-17 00:47:24.106909 | orchestrator |                 "vg_name": "ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771"
2026-03-17 00:47:24.106915 | orchestrator |             },
2026-03-17 00:47:24.106922 | orchestrator |             {
2026-03-17 00:47:24.106927 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-17 00:47:24.106933 | orchestrator |                 "vg_name": "ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5"
2026-03-17 00:47:24.106940 | orchestrator |             }
2026-03-17 00:47:24.106947 | orchestrator |         ]
2026-03-17 00:47:24.106954 | orchestrator |     }
2026-03-17 00:47:24.106962 | orchestrator | }
2026-03-17 00:47:24.106970 | orchestrator | 
2026-03-17 00:47:24.106976 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:47:24.106983 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:47:24.106991 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:47:24.106998 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:47:24.107005 | orchestrator | 
2026-03-17 00:47:24.107013 | orchestrator | 
2026-03-17 00:47:24.107019 | orchestrator | 
2026-03-17 00:47:24.107035 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:47:24.107042 | orchestrator | Tuesday 17 March 2026 00:47:24 +0000 (0:00:00.148) 0:01:13.274 *********
2026-03-17 00:47:24.107049 | orchestrator | ===============================================================================
2026-03-17 00:47:24.107056 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s
2026-03-17 00:47:24.107063 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s
2026-03-17 00:47:24.107070 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s
2026-03-17 00:47:24.107078 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.76s
2026-03-17 00:47:24.107085 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.73s
2026-03-17 00:47:24.107092 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s
2026-03-17 00:47:24.107099 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s
2026-03-17 00:47:24.107106 | orchestrator | Add known partitions to the list of available block devices ------------- 1.45s
2026-03-17 00:47:24.107120 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s
2026-03-17 00:47:24.468936 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2026-03-17 00:47:24.469009 | orchestrator | Print LVM report data --------------------------------------------------- 0.96s
2026-03-17 00:47:24.469015 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.88s
2026-03-17 00:47:24.469020 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-03-17 00:47:24.469025 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2026-03-17 00:47:24.469050 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2026-03-17 00:47:24.469056 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.74s
2026-03-17 00:47:24.469070 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s
2026-03-17 00:47:24.469075 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.72s
2026-03-17 00:47:24.469080 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-03-17 00:47:24.469084 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-03-17 00:47:36.062883 | orchestrator | 2026-03-17 00:47:36 | INFO  | Prepare task for execution of facts.
2026-03-17 00:47:36.140629 | orchestrator | 2026-03-17 00:47:36 | INFO  | Task 34704f66-cffe-4e49-ba3b-179ffc58816c (facts) was prepared for execution.
2026-03-17 00:47:36.140707 | orchestrator | 2026-03-17 00:47:36 | INFO  | It takes a moment until task 34704f66-cffe-4e49-ba3b-179ffc58816c (facts) has been started and output is visible here.
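The play above reads `lvs` and `pvs` JSON reports ("Get list of Ceph LVs/PVs with associated VGs"), merges them into the `lvm_report` structure printed in the log, and derives VG/LV names to validate the `lvm_volumes` entries. A minimal Python sketch of that combine step, assuming report output shaped like LVM's `--reportformat json` (the exact shape and the combine logic here are illustrative, not the playbook's actual implementation):

```python
import json

# Sample output shaped like `lvs -o lv_name,vg_name --reportformat json`
# and `pvs -o pv_name,vg_name --reportformat json` (assumed layout).
lvs_json = """{"report": [{"lv": [
  {"lv_name": "osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771",
   "vg_name": "ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771"}]}]}"""
pvs_json = """{"report": [{"pv": [
  {"pv_name": "/dev/sdb",
   "vg_name": "ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771"}]}]}"""

# Combine both reports into one structure like the lvm_report printed above.
lvm_report = {
    "lv": json.loads(lvs_json)["report"][0]["lv"],
    "pv": json.loads(pvs_json)["report"][0]["pv"],
}

# Build "vg/lv" names, e.g. to check that every lvm_volumes entry exists.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

With the two-OSD layout from this node, the same approach would yield one `vg/lv` name per data LV, which the "Fail if ... LV defined in lvm_volumes is missing" checks can match against.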
2026-03-17 00:47:48.236553 | orchestrator | 
2026-03-17 00:47:48.236672 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-17 00:47:48.236689 | orchestrator | 
2026-03-17 00:47:48.236697 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-17 00:47:48.236705 | orchestrator | Tuesday 17 March 2026 00:47:39 +0000 (0:00:00.338) 0:00:00.338 *********
2026-03-17 00:47:48.236712 | orchestrator | ok: [testbed-manager]
2026-03-17 00:47:48.236720 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:47:48.236727 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:47:48.236734 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:47:48.236741 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:47:48.236747 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:47:48.236754 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:48.236761 | orchestrator | 
2026-03-17 00:47:48.236768 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-17 00:47:48.236774 | orchestrator | Tuesday 17 March 2026 00:47:40 +0000 (0:00:01.298) 0:00:01.637 *********
2026-03-17 00:47:48.236781 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:47:48.236789 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:47:48.236796 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:47:48.236802 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:47:48.236809 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:47:48.236816 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:47:48.236822 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:48.236829 | orchestrator | 
2026-03-17 00:47:48.236836 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:47:48.236842 | orchestrator | 
2026-03-17 00:47:48.236849 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:47:48.236856 | orchestrator | Tuesday 17 March 2026 00:47:41 +0000 (0:00:01.170) 0:00:02.808 *********
2026-03-17 00:47:48.236863 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:47:48.236869 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:47:48.236876 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:47:48.236883 | orchestrator | ok: [testbed-manager]
2026-03-17 00:47:48.236889 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:47:48.236896 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:47:48.236903 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:47:48.236909 | orchestrator | 
2026-03-17 00:47:48.236916 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-17 00:47:48.236923 | orchestrator | 
2026-03-17 00:47:48.236930 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-17 00:47:48.236937 | orchestrator | Tuesday 17 March 2026 00:47:47 +0000 (0:00:05.676) 0:00:08.484 *********
2026-03-17 00:47:48.236943 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:47:48.236950 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:47:48.236977 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:47:48.236984 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:47:48.236991 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:47:48.236997 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:47:48.237004 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:47:48.237010 | orchestrator | 
2026-03-17 00:47:48.237017 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:47:48.237024 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237031 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237038 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237045 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237053 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237060 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237068 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:47:48.237075 | orchestrator | 
2026-03-17 00:47:48.237083 | orchestrator | 
2026-03-17 00:47:48.237090 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:47:48.237098 | orchestrator | Tuesday 17 March 2026 00:47:47 +0000 (0:00:00.459) 0:00:08.943 *********
2026-03-17 00:47:48.237106 | orchestrator | ===============================================================================
2026-03-17 00:47:48.237113 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s
2026-03-17 00:47:48.237121 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s
2026-03-17 00:47:48.237141 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s
2026-03-17 00:47:48.237149 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2026-03-17 00:47:59.570610 | orchestrator | 2026-03-17 00:47:59 | INFO  | Prepare task for execution of frr.
2026-03-17 00:47:59.649916 | orchestrator | 2026-03-17 00:47:59 | INFO  | Task 21f1ae84-d566-4187-831b-addef28e3045 (frr) was prepared for execution.
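The facts play above first creates the custom facts directory before gathering. In Ansible, local facts live in `/etc/ansible/facts.d`: any `*.fact` file containing JSON (or an executable emitting JSON) there is exposed as `ansible_local.<name>` during fact gathering. A hedged sketch of that mechanism, using a temporary directory in place of `/etc/ansible/facts.d` and a made-up fact name and content:

```python
import json
import tempfile
from pathlib import Path

# Stand-in for /etc/ansible/facts.d so the sketch runs anywhere;
# "testbed.fact" and its content are hypothetical examples.
facts_d = Path(tempfile.mkdtemp())
(facts_d / "testbed.fact").write_text(json.dumps({"role": "manager"}))

# Roughly what the setup module does for static JSON fact files:
# each <name>.fact becomes ansible_local.<name>.
ansible_local = {p.stem: json.loads(p.read_text()) for p in facts_d.glob("*.fact")}
```

After gathering, such a fact would be reachable in playbooks as `ansible_local.testbed.role` in this hypothetical example.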
2026-03-17 00:47:59.650003 | orchestrator | 2026-03-17 00:47:59 | INFO  | It takes a moment until task 21f1ae84-d566-4187-831b-addef28e3045 (frr) has been started and output is visible here.
2026-03-17 00:48:23.714841 | orchestrator | 
2026-03-17 00:48:23.714938 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-17 00:48:23.714951 | orchestrator | 
2026-03-17 00:48:23.714961 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-17 00:48:23.714971 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.303) 0:00:00.303 *********
2026-03-17 00:48:23.714980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:48:23.714990 | orchestrator | 
2026-03-17 00:48:23.714999 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-17 00:48:23.715008 | orchestrator | Tuesday 17 March 2026 00:48:03 +0000 (0:00:00.216) 0:00:00.519 *********
2026-03-17 00:48:23.715017 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:23.715027 | orchestrator | 
2026-03-17 00:48:23.715035 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-17 00:48:23.715062 | orchestrator | Tuesday 17 March 2026 00:48:04 +0000 (0:00:01.485) 0:00:02.005 *********
2026-03-17 00:48:23.715071 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:23.715079 | orchestrator | 
2026-03-17 00:48:23.715088 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-17 00:48:23.715097 | orchestrator | Tuesday 17 March 2026 00:48:14 +0000 (0:00:09.544) 0:00:11.549 *********
2026-03-17 00:48:23.715106 | orchestrator | ok: [testbed-manager]
2026-03-17 00:48:23.715115 | orchestrator | 
2026-03-17 00:48:23.715124 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-17 00:48:23.715133 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:01.021) 0:00:12.571 *********
2026-03-17 00:48:23.715142 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:23.715151 | orchestrator | 
2026-03-17 00:48:23.715159 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-17 00:48:23.715168 | orchestrator | Tuesday 17 March 2026 00:48:16 +0000 (0:00:00.964) 0:00:13.535 *********
2026-03-17 00:48:23.715176 | orchestrator | ok: [testbed-manager]
2026-03-17 00:48:23.715185 | orchestrator | 
2026-03-17 00:48:23.715193 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-03-17 00:48:23.715202 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:01.138) 0:00:14.673 *********
2026-03-17 00:48:23.715211 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:48:23.715219 | orchestrator | 
2026-03-17 00:48:23.715228 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-03-17 00:48:23.715236 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.148) 0:00:14.822 *********
2026-03-17 00:48:23.715245 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:48:23.715254 | orchestrator | 
2026-03-17 00:48:23.715262 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-03-17 00:48:23.715271 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.258) 0:00:15.081 *********
2026-03-17 00:48:23.715279 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:48:23.715288 | orchestrator | 
2026-03-17 00:48:23.715296 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-17 00:48:23.715306 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.122) 0:00:15.233 *********
2026-03-17 00:48:23.715314 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:48:23.715323 | orchestrator | 
2026-03-17 00:48:23.715331 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-17 00:48:23.715340 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.141) 0:00:15.356 *********
2026-03-17 00:48:23.715348 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:48:23.715357 | orchestrator | 
2026-03-17 00:48:23.715366 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-17 00:48:23.715374 | orchestrator | Tuesday 17 March 2026 00:48:18 +0000 (0:00:00.141) 0:00:15.497 *********
2026-03-17 00:48:23.715383 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:23.715391 | orchestrator | 
2026-03-17 00:48:23.715400 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-17 00:48:23.715409 | orchestrator | Tuesday 17 March 2026 00:48:19 +0000 (0:00:00.947) 0:00:16.445 *********
2026-03-17 00:48:23.715417 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-17 00:48:23.715426 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-17 00:48:23.715437 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-17 00:48:23.715452 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-17 00:48:23.715465 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-17 00:48:23.715478 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-17 00:48:23.715501 | orchestrator | 
2026-03-17 00:48:23.715515 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-17 00:48:23.715539 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:02.045) 0:00:18.490 *********
2026-03-17 00:48:23.715654 | orchestrator | ok: [testbed-manager]
2026-03-17 00:48:23.715671 | orchestrator | 
2026-03-17 00:48:23.715685 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-17 00:48:23.715700 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:01.093) 0:00:19.584 *********
2026-03-17 00:48:23.715713 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:23.715728 | orchestrator | 
2026-03-17 00:48:23.715741 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:48:23.715757 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:48:23.715771 | orchestrator | 
2026-03-17 00:48:23.715786 | orchestrator | 
2026-03-17 00:48:23.715823 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:48:23.715839 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:01.274) 0:00:20.859 *********
2026-03-17 00:48:23.715848 | orchestrator | ===============================================================================
2026-03-17 00:48:23.715857 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.54s
2026-03-17 00:48:23.715866 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.05s
2026-03-17 00:48:23.715874 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.49s
2026-03-17 00:48:23.715882 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.27s
2026-03-17 00:48:23.715891 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.14s
2026-03-17 00:48:23.715899 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.09s
2026-03-17 00:48:23.715908 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s
2026-03-17 00:48:23.715917 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s
2026-03-17 00:48:23.715925 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.95s
2026-03-17 00:48:23.715934 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.26s
2026-03-17 00:48:23.715943 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2026-03-17 00:48:23.715951 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-03-17 00:48:23.715959 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-03-17 00:48:23.715968 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s
2026-03-17 00:48:23.715976 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s
2026-03-17 00:48:23.849614 | orchestrator | 
2026-03-17 00:48:23.851875 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 17 00:48:23 UTC 2026
2026-03-17 00:48:23.851957 | orchestrator | 
2026-03-17 00:48:24.874984 | orchestrator | 2026-03-17 00:48:24 | INFO  | Collection nutshell is prepared for execution
2026-03-17 00:48:24.980310 | orchestrator | 2026-03-17 00:48:24 | INFO  | A [0] - dotfiles
2026-03-17 00:48:35.101828 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - homer
2026-03-17 00:48:35.101908 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - netdata
2026-03-17 00:48:35.102697 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - openstackclient
2026-03-17 00:48:35.102754 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - phpmyadmin
2026-03-17 00:48:35.102980 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - common
2026-03-17 00:48:35.107977 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- loadbalancer
2026-03-17 00:48:35.108036 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [2] --- opensearch
2026-03-17 00:48:35.108241 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [2] --- mariadb-ng
2026-03-17 00:48:35.108673 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [3] ---- horizon
2026-03-17 00:48:35.108720 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [3] ---- keystone
2026-03-17 00:48:35.109380 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- neutron
2026-03-17 00:48:35.109648 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ wait-for-nova
2026-03-17 00:48:35.109664 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [6] ------- octavia
2026-03-17 00:48:35.111195 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- barbican
2026-03-17 00:48:35.111234 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- designate
2026-03-17 00:48:35.111467 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- ironic
2026-03-17 00:48:35.111562 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- placement
2026-03-17 00:48:35.111866 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- magnum
2026-03-17 00:48:35.113420 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- openvswitch
2026-03-17 00:48:35.113810 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [2] --- ovn
2026-03-17 00:48:35.114224 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- memcached
2026-03-17 00:48:35.114263 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- redis
2026-03-17 00:48:35.114270 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- rabbitmq-ng
2026-03-17 00:48:35.114632 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - kubernetes
2026-03-17 00:48:35.117802 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- kubeconfig
2026-03-17 00:48:35.117864 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- copy-kubeconfig
2026-03-17 00:48:35.118229 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [0] - ceph
2026-03-17 00:48:35.121144 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [1] -- ceph-pools
2026-03-17 00:48:35.121203 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [2] --- copy-ceph-keys
2026-03-17 00:48:35.121214 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [3] ---- cephclient
2026-03-17 00:48:35.121295 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-17 00:48:35.121720 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- wait-for-keystone
2026-03-17 00:48:35.121857 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-17 00:48:35.122055 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ glance
2026-03-17 00:48:35.122244 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ cinder
2026-03-17 00:48:35.122415 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ nova
2026-03-17 00:48:35.122746 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [4] ----- prometheus
2026-03-17 00:48:35.122974 | orchestrator | 2026-03-17 00:48:35 | INFO  | A [5] ------ grafana
2026-03-17 00:48:35.354421 | orchestrator | 2026-03-17 00:48:35 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-17 00:48:35.356754 | orchestrator | 2026-03-17 00:48:35 | INFO  | Tasks are running in the background
2026-03-17 00:48:37.148799 | orchestrator | 2026-03-17 00:48:37 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-17 00:48:39.323908 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED
2026-03-17 00:48:39.324919 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED
2026-03-17 00:48:39.326924 | orchestrator | 2026-03-17 00:48:39 | INFO
 | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:39.326970 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:39.326982 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:39.327912 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:39.330261 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:39.330327 | orchestrator | 2026-03-17 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:42.363517 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED 2026-03-17 00:48:42.363856 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:48:42.365431 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:42.369244 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:42.369672 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:42.370537 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:42.370957 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:42.370997 | orchestrator | 2026-03-17 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:45.420997 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED 2026-03-17 00:48:45.421095 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 
7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:48:45.421111 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:45.421123 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:45.421134 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:45.421145 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:45.421157 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:45.421168 | orchestrator | 2026-03-17 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:48.567171 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED 2026-03-17 00:48:48.567245 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:48:48.567252 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:48.568084 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:48.568351 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:48.571940 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:48.576344 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:48.576433 | orchestrator | 2026-03-17 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:51.612863 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 
d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED 2026-03-17 00:48:51.612930 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:48:51.612936 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:51.612963 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:51.612969 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:51.612978 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:51.618715 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:51.618813 | orchestrator | 2026-03-17 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:54.753053 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED 2026-03-17 00:48:54.753593 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:48:54.754907 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:48:54.757027 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:48:54.761000 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:48:54.763314 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:48:54.765549 | orchestrator | 2026-03-17 00:48:54 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:48:54.767387 | orchestrator | 2026-03-17 
00:48:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:48:57.829363 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state STARTED
2026-03-17 00:48:57.848710 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED
2026-03-17 00:48:57.848779 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:48:57.848785 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:48:57.848790 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED
2026-03-17 00:48:57.848794 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:48:57.848799 | orchestrator | 2026-03-17 00:48:57 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:48:57.848803 | orchestrator | 2026-03-17 00:48:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:01.112217 | orchestrator |
2026-03-17 00:49:01.112335 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-17 00:49:01.112349 | orchestrator |
2026-03-17 00:49:01.112357 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-17 00:49:01.112373 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:01.359) 0:00:01.359 *********
2026-03-17 00:49:01.112399 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:01.112407 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:01.112413 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:01.112420 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:01.112427 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:01.112434 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:01.112441 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:01.112447 | orchestrator |
2026-03-17 00:49:01.112453 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-17 00:49:01.112459 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:04.409) 0:00:05.769 *********
2026-03-17 00:49:01.112465 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-17 00:49:01.112471 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-17 00:49:01.112477 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-17 00:49:01.112482 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-17 00:49:01.112488 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-17 00:49:01.112494 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-17 00:49:01.112499 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-17 00:49:01.112505 | orchestrator |
2026-03-17 00:49:01.112511 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-03-17 00:49:01.112518 | orchestrator | Tuesday 17 March 2026 00:48:51 +0000 (0:00:01.993) 0:00:07.762 ********* 2026-03-17 00:49:01.112528 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:50.714928', 'end': '2026-03-17 00:48:51.727362', 'delta': '0:00:01.012434', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112540 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:51.102078', 'end': '2026-03-17 00:48:51.111688', 'delta': '0:00:00.009610', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112546 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:50.423686', 'end': '2026-03-17 00:48:50.428192', 'delta': '0:00:00.004506', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112590 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:51.493218', 'end': '2026-03-17 00:48:51.498777', 'delta': '0:00:00.005559', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112599 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:51.510602', 'end': '2026-03-17 00:48:51.518324', 'delta': '0:00:00.007722', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112606 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:51.786983', 'end': '2026-03-17 00:48:51.796488', 'delta': '0:00:00.009505', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112612 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:51.643148', 'end': '2026-03-17 00:48:51.650409', 'delta': '0:00:00.007261', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:49:01.112619 | orchestrator | 2026-03-17 00:49:01.112625 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-17 00:49:01.112632 | orchestrator | Tuesday 17 March 2026 00:48:53 +0000 (0:00:01.552) 0:00:09.314 ********* 2026-03-17 00:49:01.112686 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-17 00:49:01.112694 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-17 00:49:01.112701 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-17 00:49:01.112708 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-17 00:49:01.112722 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-17 00:49:01.112729 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-17 00:49:01.112735 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-17 00:49:01.112742 | orchestrator | 2026-03-17 00:49:01.112747 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-17 00:49:01.112753 | orchestrator | Tuesday 17 March 2026 00:48:57 +0000 (0:00:03.703) 0:00:13.018 ********* 2026-03-17 00:49:01.112759 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-17 00:49:01.112765 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-17 00:49:01.112772 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-17 00:49:01.112778 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-17 00:49:01.112785 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-17 00:49:01.112791 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-17 00:49:01.112798 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-17 00:49:01.112804 | orchestrator | 2026-03-17 00:49:01.112811 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:49:01.112827 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112841 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112849 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112856 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112864 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112870 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112877 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:49:01.112883 | orchestrator | 2026-03-17 00:49:01.112890 | orchestrator | 2026-03-17 00:49:01.112897 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-17 00:49:01.112904 | orchestrator | Tuesday 17 March 2026 00:48:59 +0000 (0:00:02.714) 0:00:15.733 ********* 2026-03-17 00:49:01.112911 | orchestrator | =============================================================================== 2026-03-17 00:49:01.112917 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s 2026-03-17 00:49:01.112924 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.70s 2026-03-17 00:49:01.112930 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.71s 2026-03-17 00:49:01.112936 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.99s 2026-03-17 00:49:01.112942 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.55s 2026-03-17 00:49:01.112948 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task d356b90b-1b77-4171-8013-c1d13155f73b is in state SUCCESS 2026-03-17 00:49:01.112956 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:01.112963 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:01.112970 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:01.112977 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:01.112989 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:01.112996 | orchestrator | 2026-03-17 00:49:00 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:01.113003 | orchestrator | 2026-03-17 00:49:00 | INFO  | Wait 1 second(s) 
until the next check 2026-03-17 00:49:03.973881 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:03.976514 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:03.979366 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:03.981055 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:03.984446 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:49:03.987561 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:03.990252 | orchestrator | 2026-03-17 00:49:03 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:03.990302 | orchestrator | 2026-03-17 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:07.031028 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:07.031407 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:07.032049 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:07.032752 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:07.033885 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:49:07.034198 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:07.039099 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 
22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:07.039259 | orchestrator | 2026-03-17 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:10.167471 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:10.169633 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:10.172248 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:10.174460 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:10.176324 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:49:10.178229 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:10.180862 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:10.180919 | orchestrator | 2026-03-17 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:13.342139 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:13.342292 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:13.342310 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:13.342322 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:13.342333 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:49:13.342344 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 
2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:13.342355 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:13.342366 | orchestrator | 2026-03-17 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:16.286950 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:16.287146 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:16.291126 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:16.291820 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:16.292436 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:49:16.295246 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:49:16.296177 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:49:16.296209 | orchestrator | 2026-03-17 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:19.656077 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED 2026-03-17 00:49:19.656170 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED 2026-03-17 00:49:19.656181 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:49:19.656188 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state STARTED 2026-03-17 00:49:19.656195 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 
323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:19.656202 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:19.656207 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:19.656213 | orchestrator | 2026-03-17 00:49:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:22.744933 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED
2026-03-17 00:49:22.745008 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:22.745017 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:22.745023 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 47db3a7b-ff56-4640-83e3-1305413e4738 is in state SUCCESS
2026-03-17 00:49:22.745030 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:22.745058 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:22.745064 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:22.745070 | orchestrator | 2026-03-17 00:49:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:25.733296 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED
2026-03-17 00:49:25.733626 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:25.734324 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:25.735982 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:25.736524 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:25.737268 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:25.737285 | orchestrator | 2026-03-17 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:28.771512 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state STARTED
2026-03-17 00:49:28.771951 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:28.773588 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:28.774231 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:28.775283 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:28.777009 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:28.777060 | orchestrator | 2026-03-17 00:49:28 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:31.815001 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 7426541a-3547-4ea0-8750-36668f0e56fe is in state SUCCESS
2026-03-17 00:49:31.816070 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:31.817148 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:31.819013 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:31.820198 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:31.821325 | orchestrator | 2026-03-17 00:49:31 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:31.821361 | orchestrator | 2026-03-17 00:49:31 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:34.869375 | orchestrator | 2026-03-17 00:49:34 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:34.869547 | orchestrator | 2026-03-17 00:49:34 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:34.869575 | orchestrator | 2026-03-17 00:49:34 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:34.875192 | orchestrator | 2026-03-17 00:49:34 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:34.875957 | orchestrator | 2026-03-17 00:49:34 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:34.875987 | orchestrator | 2026-03-17 00:49:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:37.912566 | orchestrator | 2026-03-17 00:49:37 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:37.912644 | orchestrator | 2026-03-17 00:49:37 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:37.912650 | orchestrator | 2026-03-17 00:49:37 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:37.914489 | orchestrator | 2026-03-17 00:49:37 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:37.914571 | orchestrator | 2026-03-17 00:49:37 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:37.914581 | orchestrator | 2026-03-17 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:41.040585 | orchestrator | 2026-03-17 00:49:41 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:41.040665 | orchestrator | 2026-03-17 00:49:41 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:41.040677 | orchestrator | 2026-03-17 00:49:41 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:41.040684 | orchestrator | 2026-03-17 00:49:41 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:41.040691 | orchestrator | 2026-03-17 00:49:41 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:41.040698 | orchestrator | 2026-03-17 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:44.101834 | orchestrator | 2026-03-17 00:49:44 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:44.103334 | orchestrator | 2026-03-17 00:49:44 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:44.103391 | orchestrator | 2026-03-17 00:49:44 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:44.104953 | orchestrator | 2026-03-17 00:49:44 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:44.106578 | orchestrator | 2026-03-17 00:49:44 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:44.106615 | orchestrator | 2026-03-17 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:47.192850 | orchestrator | 2026-03-17 00:49:47 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:47.192906 | orchestrator | 2026-03-17 00:49:47 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:47.196830 | orchestrator | 2026-03-17 00:49:47 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:47.197356 | orchestrator | 2026-03-17 00:49:47 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:47.198620 | orchestrator | 2026-03-17 00:49:47 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:47.198663 | orchestrator | 2026-03-17 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:50.290864 | orchestrator | 2026-03-17 00:49:50 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:50.291945 | orchestrator | 2026-03-17 00:49:50 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:50.293009 | orchestrator | 2026-03-17 00:49:50 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:50.294132 | orchestrator | 2026-03-17 00:49:50 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:50.295581 | orchestrator | 2026-03-17 00:49:50 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:50.295603 | orchestrator | 2026-03-17 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:53.341964 | orchestrator | 2026-03-17 00:49:53 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:53.342046 | orchestrator | 2026-03-17 00:49:53 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:53.342792 | orchestrator | 2026-03-17 00:49:53 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:53.343371 | orchestrator | 2026-03-17 00:49:53 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:53.344554 | orchestrator | 2026-03-17 00:49:53 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:53.344592 | orchestrator | 2026-03-17 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:56.644388 | orchestrator | 2026-03-17 00:49:56 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:56.645795 | orchestrator | 2026-03-17 00:49:56 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:56.646543 | orchestrator | 2026-03-17 00:49:56 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:56.648405 | orchestrator | 2026-03-17 00:49:56 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:56.650492 | orchestrator | 2026-03-17 00:49:56 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:56.650522 | orchestrator | 2026-03-17 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:49:59.836818 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:49:59.840831 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:49:59.845001 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:49:59.849828 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:49:59.849872 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:49:59.849878 | orchestrator | 2026-03-17 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:02.910181 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:50:02.913025 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:50:02.914476 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:50:02.918616 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:50:02.920282 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:50:02.920394 | orchestrator | 2026-03-17 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:05.968510 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:50:05.970647 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:50:05.975076 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:50:05.976414 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:50:05.978081 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:50:05.978118 | orchestrator | 2026-03-17 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:09.043770 | orchestrator | 2026-03-17 00:50:09 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:50:09.048497 | orchestrator | 2026-03-17 00:50:09 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:50:09.051726 | orchestrator | 2026-03-17 00:50:09 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:50:09.056055 | orchestrator | 2026-03-17 00:50:09 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:50:09.057848 | orchestrator | 2026-03-17 00:50:09 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:50:09.057940 | orchestrator | 2026-03-17 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:12.111780 | orchestrator | 2026-03-17 00:50:12 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:50:12.112890 | orchestrator | 2026-03-17 00:50:12 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:50:12.114117 | orchestrator | 2026-03-17 00:50:12 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:50:12.115417 | orchestrator | 2026-03-17 00:50:12 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:50:12.117942 | orchestrator | 2026-03-17 00:50:12 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:50:12.117978 | orchestrator | 2026-03-17 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:15.169140 | orchestrator | 2026-03-17 00:50:15 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state STARTED
2026-03-17 00:50:15.172781 | orchestrator | 2026-03-17 00:50:15 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:50:15.174879 | orchestrator | 2026-03-17 00:50:15 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED
2026-03-17 00:50:15.178739 | orchestrator | 2026-03-17 00:50:15 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED
2026-03-17 00:50:15.181588 | orchestrator | 2026-03-17 00:50:15 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:50:15.181642 | orchestrator | 2026-03-17 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:18.230993 | orchestrator | 2026-03-17 00:50:18 | INFO  | Task 656b6a6c-9640-45a5-aa28-d0d414ad8a4e is in state SUCCESS
2026-03-17 00:50:18.232667 | orchestrator |
2026-03-17 00:50:18.232717 | orchestrator |
2026-03-17 00:50:18.232726 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-17 00:50:18.232734 | orchestrator |
2026-03-17 00:50:18.232740 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-17 00:50:18.232747 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.637) 0:00:00.637 *********
2026-03-17 00:50:18.232754 | orchestrator | ok: [testbed-manager] => {
2026-03-17 00:50:18.232774 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-17 00:50:18.232782 | orchestrator | }
2026-03-17 00:50:18.232789 | orchestrator |
2026-03-17 00:50:18.232796 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-17 00:50:18.232815 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.794) 0:00:01.432 *********
2026-03-17 00:50:18.232825 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.232832 | orchestrator |
2026-03-17 00:50:18.232838 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-17 00:50:18.232845 | orchestrator | Tuesday 17 March 2026 00:48:48 +0000 (0:00:02.656) 0:00:04.088 *********
2026-03-17 00:50:18.232851 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-17 00:50:18.232858 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-17 00:50:18.232864 | orchestrator |
2026-03-17 00:50:18.232870 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-17 00:50:18.232877 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:01.274) 0:00:05.363 *********
2026-03-17 00:50:18.232883 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.232889 | orchestrator |
2026-03-17 00:50:18.232895 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-17 00:50:18.232901 | orchestrator | Tuesday 17 March 2026 00:48:52 +0000 (0:00:02.494) 0:00:07.858 *********
2026-03-17 00:50:18.232907 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.232914 | orchestrator |
2026-03-17 00:50:18.232920 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-17 00:50:18.232926 | orchestrator | Tuesday 17 March 2026 00:48:54 +0000 (0:00:01.721) 0:00:09.580 *********
2026-03-17 00:50:18.232932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-17 00:50:18.232939 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.232945 | orchestrator |
2026-03-17 00:50:18.232952 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-17 00:50:18.232958 | orchestrator | Tuesday 17 March 2026 00:49:20 +0000 (0:00:26.098) 0:00:35.678 *********
2026-03-17 00:50:18.232965 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.232971 | orchestrator |
2026-03-17 00:50:18.232977 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:50:18.232984 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:50:18.232991 | orchestrator |
2026-03-17 00:50:18.232997 | orchestrator |
2026-03-17 00:50:18.233004 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:50:18.233010 | orchestrator | Tuesday 17 March 2026 00:49:22 +0000 (0:00:01.804) 0:00:37.483 *********
2026-03-17 00:50:18.233015 | orchestrator | ===============================================================================
2026-03-17 00:50:18.233022 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.10s
2026-03-17 00:50:18.233028 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.65s
2026-03-17 00:50:18.233034 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.49s
2026-03-17 00:50:18.233041 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.80s
2026-03-17 00:50:18.233047 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.72s
2026-03-17 00:50:18.233053 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.27s
2026-03-17 00:50:18.233059 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.80s
2026-03-17 00:50:18.233065 | orchestrator |
2026-03-17 00:50:18.233072 | orchestrator |
2026-03-17 00:50:18.233078 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-17 00:50:18.233084 | orchestrator |
2026-03-17 00:50:18.233090 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-17 00:50:18.233102 | orchestrator | Tuesday 17 March 2026 00:48:43 +0000 (0:00:00.408) 0:00:00.408 *********
2026-03-17 00:50:18.233109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-17 00:50:18.233116 | orchestrator |
2026-03-17 00:50:18.233122 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-17 00:50:18.233129 | orchestrator | Tuesday 17 March 2026 00:48:44 +0000 (0:00:00.681) 0:00:01.090 *********
2026-03-17 00:50:18.233135 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-17 00:50:18.233141 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-17 00:50:18.233147 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-17 00:50:18.233154 | orchestrator |
2026-03-17 00:50:18.233160 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-17 00:50:18.233166 | orchestrator | Tuesday 17 March 2026 00:48:46 +0000 (0:00:01.879) 0:00:02.970 *********
2026-03-17 00:50:18.233173 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233179 | orchestrator |
2026-03-17 00:50:18.233185 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-17 00:50:18.233192 | orchestrator | Tuesday 17 March 2026 00:48:48 +0000 (0:00:01.855) 0:00:04.825 *********
2026-03-17 00:50:18.233208 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-17 00:50:18.233214 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.233229 | orchestrator |
2026-03-17 00:50:18.233236 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-17 00:50:18.233248 | orchestrator | Tuesday 17 March 2026 00:49:24 +0000 (0:00:35.817) 0:00:40.642 *********
2026-03-17 00:50:18.233255 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233262 | orchestrator |
2026-03-17 00:50:18.233268 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-17 00:50:18.233275 | orchestrator | Tuesday 17 March 2026 00:49:25 +0000 (0:00:01.004) 0:00:41.647 *********
2026-03-17 00:50:18.233282 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.233289 | orchestrator |
2026-03-17 00:50:18.233295 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-17 00:50:18.233305 | orchestrator | Tuesday 17 March 2026 00:49:26 +0000 (0:00:00.890) 0:00:42.537 *********
2026-03-17 00:50:18.233312 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233319 | orchestrator |
2026-03-17 00:50:18.233325 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-17 00:50:18.233332 | orchestrator | Tuesday 17 March 2026 00:49:27 +0000 (0:00:01.873) 0:00:44.411 *********
2026-03-17 00:50:18.233339 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233346 | orchestrator |
2026-03-17 00:50:18.233353 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-17 00:50:18.233360 | orchestrator | Tuesday 17 March 2026 00:49:28 +0000 (0:00:00.613) 0:00:45.024 *********
2026-03-17 00:50:18.233366 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233373 | orchestrator |
2026-03-17 00:50:18.233380 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-17 00:50:18.233387 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:00.489) 0:00:45.513 *********
2026-03-17 00:50:18.233394 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.233400 | orchestrator |
2026-03-17 00:50:18.233407 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:50:18.233414 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:50:18.233421 | orchestrator |
2026-03-17 00:50:18.233427 | orchestrator |
2026-03-17 00:50:18.233434 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:50:18.233441 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:00.361) 0:00:45.875 *********
2026-03-17 00:50:18.233450 | orchestrator | ===============================================================================
2026-03-17 00:50:18.233457 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.82s
2026-03-17 00:50:18.233464 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.88s
2026-03-17 00:50:18.233476 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.87s
2026-03-17 00:50:18.233483 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.86s
2026-03-17 00:50:18.233489 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.00s
2026-03-17 00:50:18.233495 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.89s
2026-03-17 00:50:18.233502 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.68s
2026-03-17 00:50:18.233508 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.61s
2026-03-17 00:50:18.233515 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.49s
2026-03-17 00:50:18.233521 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.36s
2026-03-17 00:50:18.233528 | orchestrator |
2026-03-17 00:50:18.233534 | orchestrator |
2026-03-17 00:50:18.233540 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:50:18.233546 | orchestrator |
2026-03-17 00:50:18.233552 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:50:18.233559 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.598) 0:00:00.598 *********
2026-03-17 00:50:18.233565 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-17 00:50:18.233571 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-17 00:50:18.233577 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-17 00:50:18.233584 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-17 00:50:18.233590 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-17 00:50:18.233596 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-17 00:50:18.233602 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-17 00:50:18.233608 | orchestrator |
2026-03-17 00:50:18.233614 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-17 00:50:18.233620 | orchestrator |
2026-03-17 00:50:18.233627 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-17 00:50:18.233633 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.898) 0:00:01.497 *********
2026-03-17 00:50:18.233648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:50:18.233655 | orchestrator |
2026-03-17 00:50:18.233661 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-17 00:50:18.233668 | orchestrator | Tuesday 17 March 2026 00:48:47 +0000 (0:00:01.819) 0:00:03.317 *********
2026-03-17 00:50:18.233674 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:50:18.233680 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:50:18.233687 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:50:18.233693 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:18.233699 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:18.233711 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:50:18.233717 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.233724 | orchestrator |
2026-03-17 00:50:18.233730 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-17 00:50:18.233736 | orchestrator | Tuesday 17 March 2026 00:48:50 +0000 (0:00:02.513) 0:00:05.830 *********
2026-03-17 00:50:18.233743 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:50:18.233749 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:50:18.233759 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.233766 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:18.233772 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:18.233778 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:50:18.233785 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:50:18.233791 | orchestrator |
2026-03-17 00:50:18.233797 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-17 00:50:18.233851 | orchestrator | Tuesday 17 March 2026 00:48:53 +0000 (0:00:02.783) 0:00:08.614 *********
2026-03-17 00:50:18.233860 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:18.233867 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.233873 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:18.233879 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:18.233885 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:18.233891 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:18.233897 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:18.233903 | orchestrator |
2026-03-17 00:50:18.233909 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-17 00:50:18.233915 | orchestrator | Tuesday 17 March 2026 00:48:55 +0000 (0:00:02.426) 0:00:11.040 *********
2026-03-17 00:50:18.233921 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:18.233927 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:18.233933 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:18.233939 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:18.234066 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:18.234074 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:18.234081 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.234087 | orchestrator |
2026-03-17 00:50:18.234094 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-17 00:50:18.234101 | orchestrator | Tuesday 17 March 2026 00:49:06 +0000 (0:00:11.085) 0:00:22.126 *********
2026-03-17 00:50:18.234106 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:18.234112 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:18.234118 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:18.234124 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:18.234130 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:18.234136 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:18.234141 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.234147 | orchestrator |
2026-03-17 00:50:18.234162 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-17 00:50:18.234174 | orchestrator | Tuesday 17 March 2026 00:49:42 +0000 (0:00:36.184) 0:00:58.310 *********
2026-03-17 00:50:18.234182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:50:18.234190 | orchestrator |
2026-03-17 00:50:18.234196 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-17 00:50:18.234202 | orchestrator | Tuesday 17 March 2026 00:49:44 +0000 (0:00:01.726) 0:01:00.036 *********
2026-03-17 00:50:18.234208 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-17 00:50:18.234214 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-17 00:50:18.234220 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-17 00:50:18.234226 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-17 00:50:18.234231 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-17 00:50:18.234237 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-17 00:50:18.234243 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-17 00:50:18.234249 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-17 00:50:18.234255 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-17 00:50:18.234261 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-17 00:50:18.234274 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-17 00:50:18.234281 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-17 00:50:18.234287 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-17 00:50:18.234293 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-17 00:50:18.234298 | orchestrator |
2026-03-17 00:50:18.234304 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-17 00:50:18.234310 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:06.147) 0:01:06.184 *********
2026-03-17 00:50:18.234316 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.234321 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:50:18.234327 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:50:18.234332 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:50:18.234338 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:50:18.234343 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:18.234348 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:18.234354 | orchestrator |
2026-03-17 00:50:18.234360 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-17 00:50:18.234365 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:01.723) 0:01:07.908 *********
2026-03-17 00:50:18.234371 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.234376 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:18.234382 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:18.234387 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:18.234393 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:18.234398 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:18.234404 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:18.234409 | orchestrator |
2026-03-17 00:50:18.234415 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-17 00:50:18.234427 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:01.964) 0:01:09.872 *********
2026-03-17 00:50:18.234432 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:50:18.234438 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:50:18.234443 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.234449 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:50:18.234454 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:50:18.234460 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:18.234465 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:18.234471 | orchestrator |
2026-03-17 00:50:18.234476 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-17 00:50:18.234482 | orchestrator | Tuesday 17 March 2026 00:49:56 +0000 (0:00:02.115) 0:01:11.987 *********
2026-03-17 00:50:18.234488 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:50:18.234493 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:18.234498 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:18.234504 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:50:18.234510 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:50:18.234515 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:50:18.234523 | orchestrator | ok: [testbed-manager]
2026-03-17 00:50:18.234530 | orchestrator |
2026-03-17 00:50:18.234536 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-17 00:50:18.234541 | orchestrator | Tuesday 17 March 2026 00:49:58 +0000 (0:00:02.533) 0:01:14.521 *********
2026-03-17 00:50:18.234547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-17 00:50:18.234554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:50:18.234560 | orchestrator |
2026-03-17 00:50:18.234566 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-17 00:50:18.234572 | orchestrator | Tuesday 17 March 2026 00:50:01 +0000 (0:00:02.332) 0:01:16.853 *********
2026-03-17 00:50:18.234577 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.234587 | orchestrator |
2026-03-17 00:50:18.234593 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-17 00:50:18.234598 | orchestrator | Tuesday 17 March 2026 00:50:03 +0000 (0:00:02.403) 0:01:19.257 *********
2026-03-17 00:50:18.234604 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:18.234609 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:18.234615 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:18.234621 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:18.234627 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:18.234632 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:18.234638 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:18.234643 | orchestrator |
2026-03-17 00:50:18.234649 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:50:18.234655 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:50:18.234662 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17
00:50:18.234692 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:18.234699 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:18.234706 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:18.234712 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:18.234718 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:18.234724 | orchestrator | 2026-03-17 00:50:18.234731 | orchestrator | 2026-03-17 00:50:18.234737 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:50:18.234743 | orchestrator | Tuesday 17 March 2026 00:50:15 +0000 (0:00:11.686) 0:01:30.944 ********* 2026-03-17 00:50:18.234749 | orchestrator | =============================================================================== 2026-03-17 00:50:18.234755 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.18s 2026-03-17 00:50:18.234761 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.69s 2026-03-17 00:50:18.234766 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.09s 2026-03-17 00:50:18.234772 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.15s 2026-03-17 00:50:18.234778 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.78s 2026-03-17 00:50:18.234783 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.53s 2026-03-17 00:50:18.234789 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.51s 2026-03-17 00:50:18.234795 | 
orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.43s 2026-03-17 00:50:18.234814 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.40s 2026-03-17 00:50:18.234820 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.33s 2026-03-17 00:50:18.234826 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.12s 2026-03-17 00:50:18.234836 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.96s 2026-03-17 00:50:18.234842 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.82s 2026-03-17 00:50:18.234848 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.73s 2026-03-17 00:50:18.234855 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.72s 2026-03-17 00:50:18.234865 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2026-03-17 00:50:18.234871 | orchestrator | 2026-03-17 00:50:18 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:18.234878 | orchestrator | 2026-03-17 00:50:18 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:50:18.236174 | orchestrator | 2026-03-17 00:50:18 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:18.237727 | orchestrator | 2026-03-17 00:50:18 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:18.237856 | orchestrator | 2026-03-17 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:21.286348 | orchestrator | 2026-03-17 00:50:21 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:21.289673 | orchestrator | 2026-03-17 00:50:21 | INFO  | Task 
323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:50:21.290755 | orchestrator | 2026-03-17 00:50:21 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:21.293205 | orchestrator | 2026-03-17 00:50:21 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:21.293247 | orchestrator | 2026-03-17 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:24.379299 | orchestrator | 2026-03-17 00:50:24 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:24.382407 | orchestrator | 2026-03-17 00:50:24 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:50:24.386378 | orchestrator | 2026-03-17 00:50:24 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:24.389290 | orchestrator | 2026-03-17 00:50:24 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:24.389568 | orchestrator | 2026-03-17 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:27.430999 | orchestrator | 2026-03-17 00:50:27 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:27.433290 | orchestrator | 2026-03-17 00:50:27 | INFO  | Task 323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state STARTED 2026-03-17 00:50:27.435240 | orchestrator | 2026-03-17 00:50:27 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:27.438871 | orchestrator | 2026-03-17 00:50:27 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:27.439475 | orchestrator | 2026-03-17 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:30.491525 | orchestrator | 2026-03-17 00:50:30 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:30.496209 | orchestrator | 2026-03-17 00:50:30 | INFO  | Task 
323a95f3-9f01-4f19-98fd-8b55b9f86b34 is in state SUCCESS 2026-03-17 00:50:30.496255 | orchestrator | 2026-03-17 00:50:30 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:30.499157 | orchestrator | 2026-03-17 00:50:30 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:30.499211 | orchestrator | 2026-03-17 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:33.564404 | orchestrator | 2026-03-17 00:50:33 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:33.568415 | orchestrator | 2026-03-17 00:50:33 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:33.571767 | orchestrator | 2026-03-17 00:50:33 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:33.571811 | orchestrator | 2026-03-17 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:36.626610 | orchestrator | 2026-03-17 00:50:36 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:36.630108 | orchestrator | 2026-03-17 00:50:36 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:36.630158 | orchestrator | 2026-03-17 00:50:36 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:36.630169 | orchestrator | 2026-03-17 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:39.680628 | orchestrator | 2026-03-17 00:50:39 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:39.682159 | orchestrator | 2026-03-17 00:50:39 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:39.683318 | orchestrator | 2026-03-17 00:50:39 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:39.683347 | orchestrator | 2026-03-17 00:50:39 | INFO  | Wait 1 second(s) until the next 
check 2026-03-17 00:50:42.715364 | orchestrator | 2026-03-17 00:50:42 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:42.715760 | orchestrator | 2026-03-17 00:50:42 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:42.716292 | orchestrator | 2026-03-17 00:50:42 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:42.716318 | orchestrator | 2026-03-17 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:45.749513 | orchestrator | 2026-03-17 00:50:45 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:45.750559 | orchestrator | 2026-03-17 00:50:45 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:45.751975 | orchestrator | 2026-03-17 00:50:45 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:45.752002 | orchestrator | 2026-03-17 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:48.800425 | orchestrator | 2026-03-17 00:50:48 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:48.801944 | orchestrator | 2026-03-17 00:50:48 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:48.803459 | orchestrator | 2026-03-17 00:50:48 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:48.803519 | orchestrator | 2026-03-17 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:51.837587 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:51.838224 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:51.839501 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 
00:50:51.839548 | orchestrator | 2026-03-17 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:54.883042 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:54.884715 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:54.886981 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:54.887070 | orchestrator | 2026-03-17 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:57.927994 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:50:57.929120 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state STARTED 2026-03-17 00:50:57.930967 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:50:57.931035 | orchestrator | 2026-03-17 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:00.965556 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task d3b73d3f-e791-45da-9cd2-c65e3a4c0484 is in state STARTED 2026-03-17 00:51:00.965628 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:00.966219 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:00.966569 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:00.967216 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED 2026-03-17 00:51:00.972610 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 2a4fa572-deff-41c7-947d-fd4219b07b6f is in state SUCCESS 2026-03-17 00:51:00.973918 | orchestrator 
| 2026-03-17 00:51:00.973962 | orchestrator | 2026-03-17 00:51:00.973968 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-17 00:51:00.973973 | orchestrator | 2026-03-17 00:51:00.973977 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-17 00:51:00.973982 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:00.296) 0:00:00.296 ********* 2026-03-17 00:51:00.973986 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:00.973991 | orchestrator | 2026-03-17 00:51:00.973995 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-17 00:51:00.974000 | orchestrator | Tuesday 17 March 2026 00:49:06 +0000 (0:00:01.355) 0:00:01.652 ********* 2026-03-17 00:51:00.974006 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-17 00:51:00.974043 | orchestrator | 2026-03-17 00:51:00.974079 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-17 00:51:00.974087 | orchestrator | Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.988) 0:00:02.641 ********* 2026-03-17 00:51:00.974093 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:00.974131 | orchestrator | 2026-03-17 00:51:00.974148 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-17 00:51:00.974155 | orchestrator | Tuesday 17 March 2026 00:49:08 +0000 (0:00:01.251) 0:00:03.892 ********* 2026-03-17 00:51:00.974161 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-03-17 00:51:00.974170 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:00.974179 | orchestrator | 2026-03-17 00:51:00.974185 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-17 00:51:00.974192 | orchestrator | Tuesday 17 March 2026 00:50:20 +0000 (0:01:11.627) 0:01:15.519 ********* 2026-03-17 00:51:00.974198 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:00.974204 | orchestrator | 2026-03-17 00:51:00.974210 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:51:00.974217 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:00.974224 | orchestrator | 2026-03-17 00:51:00.974230 | orchestrator | 2026-03-17 00:51:00.974236 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:00.974242 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:09.596) 0:01:25.115 ********* 2026-03-17 00:51:00.974268 | orchestrator | =============================================================================== 2026-03-17 00:51:00.974274 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 71.63s 2026-03-17 00:51:00.974330 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.60s 2026-03-17 00:51:00.974335 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.36s 2026-03-17 00:51:00.974339 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.25s 2026-03-17 00:51:00.974343 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.99s 2026-03-17 00:51:00.974347 | orchestrator | 2026-03-17 00:51:00.974351 | orchestrator | 2026-03-17 00:51:00.974355 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-03-17 00:51:00.974358 | orchestrator | 2026-03-17 00:51:00.974362 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-17 00:51:00.974366 | orchestrator | Tuesday 17 March 2026 00:48:38 +0000 (0:00:00.252) 0:00:00.252 ********* 2026-03-17 00:51:00.974370 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:51:00.974375 | orchestrator | 2026-03-17 00:51:00.974379 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-17 00:51:00.974383 | orchestrator | Tuesday 17 March 2026 00:48:39 +0000 (0:00:01.037) 0:00:01.289 ********* 2026-03-17 00:51:00.974387 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974390 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974394 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974398 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974402 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974405 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974409 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974413 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:51:00.974417 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974421 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974425 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974429 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974432 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974436 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974440 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:51:00.974444 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974459 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974463 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974466 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974471 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974474 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:51:00.974478 | orchestrator | 2026-03-17 00:51:00.974487 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-17 00:51:00.974490 | orchestrator | Tuesday 17 March 2026 00:48:42 +0000 (0:00:03.462) 0:00:04.751 ********* 2026-03-17 00:51:00.974498 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:51:00.974503 | orchestrator | 2026-03-17 
00:51:00.974507 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-17 00:51:00.974511 | orchestrator | Tuesday 17 March 2026 00:48:44 +0000 (0:00:01.369) 0:00:06.121 ********* 2026-03-17 00:51:00.974519 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974545 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.974563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974573 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
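The `service-cert-copy` task above loops `with_dict`-style over the common services map (fluentd, kolla-toolbox, cron), where each value carries `container_name`, `image`, `enabled`, `volumes`, and so on. A minimal sketch of that selection step, assuming the role only acts on entries whose `enabled` flag is true (the helper name is illustrative; the real role then templates the certificate into each selected container's config directory):

```python
def services_needing_certs(services: dict) -> list:
    """Select containers that receive the extra CA certificate.

    'services' mirrors the with_dict structure shown in the log: a
    mapping of service key -> settings dict. Disabled entries are
    skipped, which is what produces the 'skipping:' lines on hosts
    where a service's condition does not hold.
    """
    return [
        value["container_name"]
        for value in services.values()
        if value.get("enabled", False)
    ]
```

Because Python dicts preserve insertion order, the returned container names come out in the same order the items appear in the log's loop output.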
2026-03-17 00:51:00.974627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.974643 | orchestrator | 2026-03-17 00:51:00.974647 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-03-17 00:51:00.974654 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:05.418) 0:00:11.540 ********* 2026-03-17 00:51:00.974661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 
00:51:00.974694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974738 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:00.974748 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974762 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974795 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:00.974802 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:00.974808 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:00.974814 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:00.974822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-17 00:51:00.974848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974862 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:00.974869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974904 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:00.974910 | orchestrator | 2026-03-17 00:51:00.974916 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-17 00:51:00.974922 | orchestrator | Tuesday 17 March 2026 00:48:51 +0000 (0:00:01.846) 0:00:13.386 ********* 2026-03-17 00:51:00.974934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.974970 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.974985 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:00.974989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.975690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975732 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:00.975736 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:00.975740 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:00.975744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.975748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975763 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:00.975767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.975771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975789 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:00.975793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:51:00.975799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.975807 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:00.975810 | orchestrator | 2026-03-17 00:51:00.975814 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-17 00:51:00.975819 | orchestrator | Tuesday 17 March 2026 00:48:55 +0000 (0:00:04.325) 0:00:17.711 ********* 2026-03-17 00:51:00.975822 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:00.975829 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:00.975833 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:00.975837 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:00.975840 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:00.975844 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:00.975848 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:00.975851 | orchestrator | 2026-03-17 00:51:00.975855 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-17 00:51:00.975859 | orchestrator | Tuesday 17 March 2026 00:48:57 +0000 (0:00:01.677) 0:00:19.389 ********* 2026-03-17 00:51:00.975863 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:00.975866 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:00.975870 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:00.975874 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:00.975897 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:00.975901 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:00.975905 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:00.975909 | orchestrator | 2026-03-17 00:51:00.975913 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-17 00:51:00.975916 | orchestrator | Tuesday 17 March 2026 00:48:58 +0000 (0:00:01.399) 0:00:20.789 ********* 2026-03-17 00:51:00.975921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975935 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975984 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.975994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.975998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976021 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976033 | orchestrator | 2026-03-17 00:51:00.976037 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-17 00:51:00.976040 | orchestrator | Tuesday 17 March 2026 00:49:06 +0000 (0:00:07.380) 0:00:28.170 ********* 2026-03-17 00:51:00.976044 | orchestrator | [WARNING]: Skipped 2026-03-17 00:51:00.976049 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-17 00:51:00.976054 | orchestrator | to this access issue: 2026-03-17 00:51:00.976057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-17 00:51:00.976061 | orchestrator | directory 2026-03-17 00:51:00.976065 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:51:00.976069 | orchestrator | 2026-03-17 00:51:00.976073 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-17 00:51:00.976076 | orchestrator | Tuesday 17 March 2026 00:49:07 +0000 (0:00:01.435) 0:00:29.606 ********* 2026-03-17 00:51:00.976080 | orchestrator | [WARNING]: Skipped 2026-03-17 00:51:00.976084 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-17 00:51:00.976088 | orchestrator | to this access issue: 2026-03-17 00:51:00.976092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-17 00:51:00.976096 | orchestrator | directory 2026-03-17 00:51:00.976099 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:51:00.976103 | orchestrator | 2026-03-17 00:51:00.976107 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-17 00:51:00.976111 | orchestrator | 
Tuesday 17 March 2026 00:49:08 +0000 (0:00:01.003) 0:00:30.609 ********* 2026-03-17 00:51:00.976114 | orchestrator | [WARNING]: Skipped 2026-03-17 00:51:00.976118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-17 00:51:00.976122 | orchestrator | to this access issue: 2026-03-17 00:51:00.976126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-17 00:51:00.976129 | orchestrator | directory 2026-03-17 00:51:00.976133 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:51:00.976137 | orchestrator | 2026-03-17 00:51:00.976141 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-17 00:51:00.976144 | orchestrator | Tuesday 17 March 2026 00:49:09 +0000 (0:00:01.124) 0:00:31.734 ********* 2026-03-17 00:51:00.976148 | orchestrator | [WARNING]: Skipped 2026-03-17 00:51:00.976152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-17 00:51:00.976156 | orchestrator | to this access issue: 2026-03-17 00:51:00.976160 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-17 00:51:00.976163 | orchestrator | directory 2026-03-17 00:51:00.976167 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:51:00.976171 | orchestrator | 2026-03-17 00:51:00.976175 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-17 00:51:00.976178 | orchestrator | Tuesday 17 March 2026 00:49:10 +0000 (0:00:00.957) 0:00:32.692 ********* 2026-03-17 00:51:00.976182 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:00.976186 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:00.976190 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:51:00.976193 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:51:00.976197 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 00:51:00.976203 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:00.976207 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:51:00.976210 | orchestrator | 2026-03-17 00:51:00.976214 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-17 00:51:00.976218 | orchestrator | Tuesday 17 March 2026 00:49:15 +0000 (0:00:04.374) 0:00:37.066 ********* 2026-03-17 00:51:00.976222 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976226 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976230 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976236 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976240 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976243 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976247 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:51:00.976251 | orchestrator | 2026-03-17 00:51:00.976255 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-17 00:51:00.976258 | orchestrator | Tuesday 17 March 2026 00:49:18 +0000 (0:00:03.450) 0:00:40.517 ********* 2026-03-17 00:51:00.976262 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:00.976266 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:00.976270 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:00.976273 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 00:51:00.976279 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:51:00.976283 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:51:00.976287 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:51:00.976290 | orchestrator | 2026-03-17 00:51:00.976294 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-17 00:51:00.976298 | orchestrator | Tuesday 17 March 2026 00:49:21 +0000 (0:00:02.608) 0:00:43.125 ********* 2026-03-17 00:51:00.976302 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976310 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976316 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976461 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976475 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976482 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-03-17 00:51:00.976488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976504 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976523 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976547 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976554 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:51:00.976568 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976574 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976579 | orchestrator | 2026-03-17 00:51:00.976584 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-17 00:51:00.976593 | orchestrator | Tuesday 17 March 2026 00:49:24 +0000 (0:00:03.002) 0:00:46.128 ********* 2026-03-17 00:51:00.976602 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976607 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976613 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976632 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976637 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976643 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:51:00.976649 | orchestrator | 2026-03-17 00:51:00.976655 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-17 00:51:00.976660 | orchestrator | Tuesday 17 March 2026 00:49:26 +0000 (0:00:02.343) 0:00:48.472 ********* 2026-03-17 00:51:00.976667 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976674 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976682 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976688 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976694 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976700 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976706 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:51:00.976714 | orchestrator | 2026-03-17 00:51:00.976718 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-17 00:51:00.976722 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:02.779) 0:00:51.252 ********* 2026-03-17 00:51:00.976726 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976755 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:51:00.976772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:51:00.976776 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976784 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:51:00.976834 | orchestrator |
2026-03-17 00:51:00.976841 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-17 00:51:00.976845 | orchestrator | Tuesday 17 March 2026 00:49:32 +0000 (0:00:02.906) 0:00:54.158 *********
2026-03-17 00:51:00.976849 | orchestrator | changed: [testbed-manager]
2026-03-17 00:51:00.976853 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:00.976856 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:00.976860 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:00.976864 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:00.976868 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:00.976871 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:00.976875 | orchestrator |
2026-03-17 00:51:00.976923 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-17 00:51:00.976928 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:01.529) 0:00:55.688 *********
2026-03-17 00:51:00.976935 | orchestrator | changed: [testbed-manager]
2026-03-17 00:51:00.976939 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:00.976943 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:00.976947 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:00.976950 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:00.976954 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:00.976960 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:00.976964 | orchestrator |
2026-03-17 00:51:00.976968 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.976972 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:01.252) 0:00:56.940 *********
2026-03-17 00:51:00.976976 | orchestrator |
2026-03-17 00:51:00.976979 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.976983 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.063) 0:00:57.003 *********
2026-03-17 00:51:00.976987 | orchestrator |
2026-03-17 00:51:00.976991 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.976995 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.058) 0:00:57.061 *********
2026-03-17 00:51:00.976998 | orchestrator |
2026-03-17 00:51:00.977002 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.977006 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.058) 0:00:57.120 *********
2026-03-17 00:51:00.977010 | orchestrator |
2026-03-17 00:51:00.977014 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.977017 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.059) 0:00:57.179 *********
2026-03-17 00:51:00.977021 | orchestrator |
2026-03-17 00:51:00.977025 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.977029 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.058) 0:00:57.238 *********
2026-03-17 00:51:00.977032 | orchestrator |
2026-03-17 00:51:00.977036 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-17 00:51:00.977040 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.057) 0:00:57.295 *********
2026-03-17 00:51:00.977046 | orchestrator |
2026-03-17 00:51:00.977054 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-17 00:51:00.977064 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.080) 0:00:57.376 *********
2026-03-17 00:51:00.977070 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:00.977076 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:00.977082 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:00.977088 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:00.977094 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:00.977101 | orchestrator | changed: [testbed-manager]
2026-03-17 00:51:00.977107 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:00.977113 | orchestrator |
2026-03-17 00:51:00.977119 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-17 00:51:00.977125 | orchestrator | Tuesday 17 March 2026 00:50:04 +0000 (0:00:29.363) 0:01:26.739 *********
2026-03-17 00:51:00.977131 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:00.977138 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:00.977147 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:00.977155 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:00.977162 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:00.977168 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:00.977174 | orchestrator | changed: [testbed-manager]
2026-03-17 00:51:00.977180 | orchestrator |
2026-03-17 00:51:00.977187 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-17 00:51:00.977193 | orchestrator | Tuesday 17 March 2026 00:50:47 +0000 (0:00:42.630) 0:02:09.370 *********
2026-03-17 00:51:00.977199 | orchestrator | ok: [testbed-manager]
2026-03-17 00:51:00.977206 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:51:00.977213 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:51:00.977219 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:51:00.977231 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:51:00.977237 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:51:00.977241 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:51:00.977244 | orchestrator |
2026-03-17 00:51:00.977248 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-17 00:51:00.977252 | orchestrator | Tuesday 17 March 2026 00:50:50 +0000 (0:00:02.746) 0:02:12.116 *********
2026-03-17 00:51:00.977256 | orchestrator | changed: [testbed-manager]
2026-03-17 00:51:00.977260 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:00.977263 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:00.977267 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:00.977271 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:00.977274 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:00.977278 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:00.977282 | orchestrator |
2026-03-17 00:51:00.977285 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:51:00.977290 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977295 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977304 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977308 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977312 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977316 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977319 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:51:00.977323 | orchestrator |
2026-03-17 00:51:00.977327 | orchestrator |
2026-03-17 00:51:00.977331 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:51:00.977335 | orchestrator | Tuesday 17 March 2026 00:50:59 +0000 (0:00:09.346) 0:02:21.462 *********
2026-03-17 00:51:00.977339 | orchestrator | ===============================================================================
2026-03-17 00:51:00.977342 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.63s
2026-03-17 00:51:00.977346 | orchestrator | common : Restart fluentd container ------------------------------------- 29.36s
2026-03-17 00:51:00.977350 | orchestrator | common : Restart cron container ----------------------------------------- 9.35s
2026-03-17 00:51:00.977354 | orchestrator | common : Copying over config.json files for services -------------------- 7.38s
2026-03-17 00:51:00.977357 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.42s
2026-03-17 00:51:00.977361 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.37s
2026-03-17 00:51:00.977365 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.33s
2026-03-17 00:51:00.977369 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.46s
2026-03-17 00:51:00.977372 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.45s
2026-03-17 00:51:00.977376 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.00s
2026-03-17 00:51:00.977380 | orchestrator | common : Check common containers ---------------------------------------- 2.91s
2026-03-17 00:51:00.977384 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.78s
2026-03-17 00:51:00.977391 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.75s
2026-03-17 00:51:00.977395 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.61s
2026-03-17 00:51:00.977399 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.34s
2026-03-17 00:51:00.977402 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.85s
2026-03-17 00:51:00.977406 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.68s
2026-03-17 00:51:00.977410 | orchestrator | common : Creating log volume -------------------------------------------- 1.53s
2026-03-17 00:51:00.977414 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.44s
2026-03-17 00:51:00.977417 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.40s
2026-03-17 00:51:00.977421 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:00.977425 | orchestrator | 2026-03-17 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:03.997232 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task d3b73d3f-e791-45da-9cd2-c65e3a4c0484 is in state STARTED
2026-03-17 00:51:03.997328 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:03.998773 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:03.998826 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:03.999497 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:04.000201 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:04.000237 | orchestrator | 2026-03-17 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:07.028851 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task d3b73d3f-e791-45da-9cd2-c65e3a4c0484 is in state STARTED
2026-03-17 00:51:07.029329 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:07.029885 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:07.031748 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:07.032381 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:07.033119 | orchestrator | 2026-03-17 00:51:07 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:07.033141 | orchestrator | 2026-03-17 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:10.066212 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task d3b73d3f-e791-45da-9cd2-c65e3a4c0484 is in state STARTED
2026-03-17 00:51:10.066754 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:10.067546 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:10.068385 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:10.069078 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:10.069931 | orchestrator | 2026-03-17 00:51:10 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:10.069962 | orchestrator | 2026-03-17 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:13.095778 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task d3b73d3f-e791-45da-9cd2-c65e3a4c0484 is in state SUCCESS
2026-03-17 00:51:13.096067 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:13.096760 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:13.097967 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:13.098843 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:13.100681 | orchestrator | 2026-03-17 00:51:13 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:13.100719 | orchestrator | 2026-03-17 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:16.239426 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:16.239507 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:16.239529 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:16.239543 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:16.239549 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:16.239556 | orchestrator | 2026-03-17 00:51:16 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED
2026-03-17 00:51:16.239563 | orchestrator | 2026-03-17 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:19.175003 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:19.176338 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:19.177604 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:19.181120 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:19.181577 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:19.182308 | orchestrator | 2026-03-17 00:51:19 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED
2026-03-17 00:51:19.182327 | orchestrator | 2026-03-17 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:22.225773 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:22.225849 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:22.226544 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:22.227194 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:22.228061 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:22.228905 | orchestrator | 2026-03-17 00:51:22 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED
2026-03-17 00:51:22.229476 | orchestrator | 2026-03-17 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:25.365836 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:25.366129 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:25.366682 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:25.368979 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state STARTED
2026-03-17 00:51:25.369742 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:51:25.370322 | orchestrator | 2026-03-17 00:51:25 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED
2026-03-17 00:51:25.370349 | orchestrator | 2026-03-17 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:51:28.403824 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:51:28.404068 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED
2026-03-17 00:51:28.405162 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:51:28.406688 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task 406bee1c-2a5f-4aae-b19a-05ed471edf68 is in state SUCCESS
2026-03-17 00:51:28.407444 | orchestrator |
2026-03-17 00:51:28.407469 | orchestrator |
2026-03-17 00:51:28.407474 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:51:28.407480 | orchestrator |
2026-03-17 00:51:28.407484 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:51:28.407489 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.258) 0:00:00.258 *********
2026-03-17 00:51:28.407493 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:51:28.407498 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:51:28.407502 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:51:28.407506 | orchestrator |
2026-03-17 00:51:28.407510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:51:28.407514 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.307) 0:00:00.566 *********
2026-03-17 00:51:28.407519 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-17 00:51:28.407523 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-17 00:51:28.407527 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-17 00:51:28.407531 | orchestrator |
2026-03-17 00:51:28.407535 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-17 00:51:28.407539 | orchestrator |
2026-03-17 00:51:28.407543 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-17 00:51:28.407547 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.286) 0:00:00.852 *********
2026-03-17 00:51:28.407551 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:51:28.407555 | orchestrator |
2026-03-17 00:51:28.407559 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-17 00:51:28.407563 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.445) 0:00:01.298 *********
2026-03-17 00:51:28.407567 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-17 00:51:28.407572 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-17 00:51:28.407576 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-17 00:51:28.407579 | orchestrator |
2026-03-17 00:51:28.407583 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-17 00:51:28.407587 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:01.580) 0:00:02.879 *********
2026-03-17 00:51:28.407609 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-17 00:51:28.407613 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-17 00:51:28.407617 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-17 00:51:28.407621 | orchestrator |
2026-03-17 00:51:28.407624 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-17 00:51:28.407628 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:01.537) 0:00:04.416 *********
2026-03-17 00:51:28.407632 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:28.407636 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:28.407640 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:28.407643 | orchestrator |
2026-03-17 00:51:28.407647 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-17 00:51:28.407654 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:01.687) 0:00:06.104 *********
2026-03-17 00:51:28.407660 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:28.407666 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:28.407672 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:28.407678 | orchestrator |
2026-03-17 00:51:28.407683 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:51:28.407690 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:51:28.407698 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:51:28.407704 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:51:28.407710 | orchestrator |
2026-03-17 00:51:28.407716 | orchestrator |
2026-03-17 00:51:28.407722 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:51:28.407728 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:03.234) 0:00:09.338 *********
2026-03-17 00:51:28.407735 | orchestrator | ===============================================================================
2026-03-17 00:51:28.407741 | orchestrator | memcached : Restart memcached container --------------------------------- 3.23s
2026-03-17 00:51:28.407748 | orchestrator | memcached : Check memcached container ----------------------------------- 1.69s
2026-03-17 00:51:28.407768 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.58s
2026-03-17 00:51:28.407775 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.54s
2026-03-17 00:51:28.407781 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.45s
2026-03-17 00:51:28.407787 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-17 00:51:28.407794 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2026-03-17 00:51:28.407800 | orchestrator |
2026-03-17 00:51:28.407803 | orchestrator |
2026-03-17 00:51:28.407807 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:51:28.407811 | orchestrator |
2026-03-17 00:51:28.407815 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:51:28.407819 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.308) 0:00:00.308 *********
2026-03-17 00:51:28.407822 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:51:28.407826 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:51:28.407830 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:51:28.407834 | orchestrator |
2026-03-17 00:51:28.407838 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:51:28.407849 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.291) 0:00:00.600 *********
2026-03-17 00:51:28.407853 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-17 00:51:28.407857 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-17 00:51:28.407861 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-17 00:51:28.407869 | orchestrator |
2026-03-17 00:51:28.407873 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-17 00:51:28.407877 | orchestrator |
2026-03-17 00:51:28.407880 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-17 00:51:28.407884 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.309) 0:00:00.910 *********
2026-03-17 00:51:28.407888 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:51:28.407892 | orchestrator |
2026-03-17 00:51:28.407895 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-17 00:51:28.407899 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.457) 0:00:01.367 *********
2026-03-17 00:51:28.407905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407972 | orchestrator |
2026-03-17 00:51:28.407976 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-17 00:51:28.407979 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:02.181) 0:00:03.549 *********
2026-03-17 00:51:28.407983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:51:28.407996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel
26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408017 | orchestrator | 2026-03-17 00:51:28.408021 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-17 00:51:28.408025 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:02.253) 0:00:05.802 ********* 2026-03-17 00:51:28.408029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408059 | orchestrator | 2026-03-17 00:51:28.408066 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-17 00:51:28.408070 | orchestrator | Tuesday 17 March 2026 00:51:10 +0000 (0:00:02.583) 
0:00:08.386 ********* 2026-03-17 00:51:28.408075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:28.408108 | orchestrator | 
2026-03-17 00:51:28.408112 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:28.408117 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:01.908) 0:00:10.295 ********* 2026-03-17 00:51:28.408121 | orchestrator | 2026-03-17 00:51:28.408125 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:28.408132 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.438) 0:00:10.734 ********* 2026-03-17 00:51:28.408136 | orchestrator | 2026-03-17 00:51:28.408140 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:28.408144 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.074) 0:00:10.808 ********* 2026-03-17 00:51:28.408149 | orchestrator | 2026-03-17 00:51:28.408153 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-17 00:51:28.408157 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.086) 0:00:10.895 ********* 2026-03-17 00:51:28.408162 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:28.408166 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:28.408170 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:28.408174 | orchestrator | 2026-03-17 00:51:28.408179 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-17 00:51:28.408183 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:04.964) 0:00:15.859 ********* 2026-03-17 00:51:28.408187 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:28.408192 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:28.408196 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:28.408200 | orchestrator | 2026-03-17 00:51:28.408204 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 
00:51:28.408209 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:28.408213 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:28.408217 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:28.408222 | orchestrator | 2026-03-17 00:51:28.408226 | orchestrator | 2026-03-17 00:51:28.408230 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:28.408234 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:08.313) 0:00:24.172 ********* 2026-03-17 00:51:28.408239 | orchestrator | =============================================================================== 2026-03-17 00:51:28.408243 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.31s 2026-03-17 00:51:28.408247 | orchestrator | redis : Restart redis container ----------------------------------------- 4.96s 2026-03-17 00:51:28.408251 | orchestrator | redis : Copying over redis config files --------------------------------- 2.58s 2026-03-17 00:51:28.408255 | orchestrator | redis : Copying over default config.json files -------------------------- 2.25s 2026-03-17 00:51:28.408260 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.18s 2026-03-17 00:51:28.408264 | orchestrator | redis : Check redis containers ------------------------------------------ 1.91s 2026-03-17 00:51:28.408272 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.60s 2026-03-17 00:51:28.408276 | orchestrator | redis : include_tasks --------------------------------------------------- 0.46s 2026-03-17 00:51:28.408280 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2026-03-17 00:51:28.408285 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-17 00:51:28.408689 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:28.411788 | orchestrator | 2026-03-17 00:51:28 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:28.411873 | orchestrator | 2026-03-17 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:31.453477 | orchestrator | 2026-03-17 00:51:31 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:31.454109 | orchestrator | 2026-03-17 00:51:31 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:31.455035 | orchestrator | 2026-03-17 00:51:31 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:31.455861 | orchestrator | 2026-03-17 00:51:31 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:31.456705 | orchestrator | 2026-03-17 00:51:31 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:31.456738 | orchestrator | 2026-03-17 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:34.493540 | orchestrator | 2026-03-17 00:51:34 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:34.494089 | orchestrator | 2026-03-17 00:51:34 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:34.495043 | orchestrator | 2026-03-17 00:51:34 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:34.495902 | orchestrator | 2026-03-17 00:51:34 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:34.497055 | orchestrator | 2026-03-17 00:51:34 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:34.497088 | orchestrator | 2026-03-17 00:51:34 
| INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:37.539420 | orchestrator | 2026-03-17 00:51:37 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:37.541041 | orchestrator | 2026-03-17 00:51:37 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:37.542314 | orchestrator | 2026-03-17 00:51:37 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:37.544009 | orchestrator | 2026-03-17 00:51:37 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:37.545267 | orchestrator | 2026-03-17 00:51:37 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:37.545312 | orchestrator | 2026-03-17 00:51:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:40.581552 | orchestrator | 2026-03-17 00:51:40 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:40.583060 | orchestrator | 2026-03-17 00:51:40 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:40.586492 | orchestrator | 2026-03-17 00:51:40 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:40.588517 | orchestrator | 2026-03-17 00:51:40 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:40.592272 | orchestrator | 2026-03-17 00:51:40 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:40.592339 | orchestrator | 2026-03-17 00:51:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:43.618759 | orchestrator | 2026-03-17 00:51:43 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:43.619303 | orchestrator | 2026-03-17 00:51:43 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:43.621265 | orchestrator | 2026-03-17 00:51:43 | INFO  | Task 
62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:43.621763 | orchestrator | 2026-03-17 00:51:43 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:43.622639 | orchestrator | 2026-03-17 00:51:43 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:43.622676 | orchestrator | 2026-03-17 00:51:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:46.714781 | orchestrator | 2026-03-17 00:51:46 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:46.714864 | orchestrator | 2026-03-17 00:51:46 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:46.714870 | orchestrator | 2026-03-17 00:51:46 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:46.714875 | orchestrator | 2026-03-17 00:51:46 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:46.714879 | orchestrator | 2026-03-17 00:51:46 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:46.714884 | orchestrator | 2026-03-17 00:51:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:49.727115 | orchestrator | 2026-03-17 00:51:49 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:49.728045 | orchestrator | 2026-03-17 00:51:49 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:49.728861 | orchestrator | 2026-03-17 00:51:49 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:49.731742 | orchestrator | 2026-03-17 00:51:49 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:49.732485 | orchestrator | 2026-03-17 00:51:49 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:49.732541 | orchestrator | 2026-03-17 00:51:49 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 00:51:52.781817 | orchestrator | 2026-03-17 00:51:52 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:52.781861 | orchestrator | 2026-03-17 00:51:52 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:52.781865 | orchestrator | 2026-03-17 00:51:52 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:52.782120 | orchestrator | 2026-03-17 00:51:52 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:52.783582 | orchestrator | 2026-03-17 00:51:52 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:52.783617 | orchestrator | 2026-03-17 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:55.834002 | orchestrator | 2026-03-17 00:51:55 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:55.835303 | orchestrator | 2026-03-17 00:51:55 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:55.835350 | orchestrator | 2026-03-17 00:51:55 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:55.835884 | orchestrator | 2026-03-17 00:51:55 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:55.836753 | orchestrator | 2026-03-17 00:51:55 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:55.836798 | orchestrator | 2026-03-17 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:58.989388 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:51:58.989915 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:51:58.989937 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 
62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:51:58.990001 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:51:58.990007 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:51:58.990061 | orchestrator | 2026-03-17 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:02.030179 | orchestrator | 2026-03-17 00:52:02 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:52:02.033021 | orchestrator | 2026-03-17 00:52:02 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state STARTED 2026-03-17 00:52:02.035784 | orchestrator | 2026-03-17 00:52:02 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:52:02.038393 | orchestrator | 2026-03-17 00:52:02 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:52:02.040540 | orchestrator | 2026-03-17 00:52:02 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:52:02.040600 | orchestrator | 2026-03-17 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:05.075716 | orchestrator | 2026-03-17 00:52:05 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:52:05.076929 | orchestrator | 2026-03-17 00:52:05 | INFO  | Task 944c0cad-f9fc-4a0a-abad-4a1df731647b is in state SUCCESS 2026-03-17 00:52:05.078294 | orchestrator | 2026-03-17 00:52:05.078337 | orchestrator | 2026-03-17 00:52:05.078345 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:52:05.078353 | orchestrator | 2026-03-17 00:52:05.078359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:52:05.078366 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.296) 0:00:00.296 
********* 2026-03-17 00:52:05.078372 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:52:05.078380 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:52:05.078386 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:52:05.078393 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:05.078399 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:52:05.078405 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:52:05.078414 | orchestrator | 2026-03-17 00:52:05.078421 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:52:05.078427 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.613) 0:00:00.909 ********* 2026-03-17 00:52:05.078432 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078444 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078462 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078477 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078510 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078516 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:52:05.078523 | orchestrator | 2026-03-17 00:52:05.078529 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-17 00:52:05.078536 | orchestrator | 2026-03-17 00:52:05.078542 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-17 00:52:05.078549 | orchestrator | Tuesday 17 March 2026 00:51:04 +0000 (0:00:00.815) 0:00:01.725 ********* 2026-03-17 00:52:05.078556 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:52:05.078564 | orchestrator | 2026-03-17 00:52:05.078580 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 00:52:05.078586 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:01.153) 0:00:02.879 ********* 2026-03-17 00:52:05.078593 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:52:05.078600 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:52:05.078606 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:52:05.078612 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:52:05.078619 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:52:05.078625 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:52:05.078632 | orchestrator | 2026-03-17 00:52:05.078638 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 00:52:05.078645 | orchestrator | Tuesday 17 March 2026 00:51:07 +0000 (0:00:01.613) 0:00:04.492 ********* 2026-03-17 00:52:05.078651 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:52:05.078656 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:52:05.078662 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:52:05.078668 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:52:05.078674 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:52:05.078680 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:52:05.078686 | orchestrator | 2026-03-17 00:52:05.078692 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 00:52:05.078698 | orchestrator | Tuesday 17 March 2026 00:51:08 
+0000 (0:00:01.439) 0:00:05.931 ********* 2026-03-17 00:52:05.078704 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-17 00:52:05.078710 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:05.078718 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-17 00:52:05.078724 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:05.078731 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-17 00:52:05.078737 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:05.078743 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-17 00:52:05.078750 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:05.078756 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-17 00:52:05.078763 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:05.078769 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-17 00:52:05.078775 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:05.078782 | orchestrator | 2026-03-17 00:52:05.078788 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-17 00:52:05.078794 | orchestrator | Tuesday 17 March 2026 00:51:09 +0000 (0:00:01.174) 0:00:07.106 ********* 2026-03-17 00:52:05.078800 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:05.078807 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:05.078813 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:05.078819 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:05.078831 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:05.078838 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:05.078845 | orchestrator | 2026-03-17 00:52:05.078851 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-17 00:52:05.078858 | orchestrator | Tuesday 17 March 2026 00:51:10 +0000 
(0:00:00.616) 0:00:07.723 *********
2026-03-17 00:52:05.078880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.078940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.078950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.078958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.078965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.078972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079016 | orchestrator |
2026-03-17 00:52:05.079024 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-17 00:52:05.079031 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:01.597) 0:00:09.320 *********
2026-03-17 00:52:05.079043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079159 | orchestrator |
2026-03-17 00:52:05.079166 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-17 00:52:05.079173 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:03.255) 0:00:12.576 *********
2026-03-17 00:52:05.079179 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:05.079185 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:05.079191 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:05.079197 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:05.079203 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:05.079209 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:05.079215 | orchestrator |
2026-03-17 00:52:05.079221 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-17 00:52:05.079230 | orchestrator | Tuesday 17 March 2026 00:51:16 +0000 (0:00:01.437) 0:00:14.013 *********
2026-03-17 00:52:05.079237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:52:05.079287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:52:05.079339 | orchestrator |
2026-03-17 00:52:05.079345 |
orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079352 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:02.621) 0:00:16.634 *********
2026-03-17 00:52:05.079357 | orchestrator |
2026-03-17 00:52:05.079363 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079369 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.179) 0:00:16.814 *********
2026-03-17 00:52:05.079374 | orchestrator |
2026-03-17 00:52:05.079380 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079386 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.123) 0:00:16.937 *********
2026-03-17 00:52:05.079392 | orchestrator |
2026-03-17 00:52:05.079397 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079409 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.185) 0:00:17.123 *********
2026-03-17 00:52:05.079415 | orchestrator |
2026-03-17 00:52:05.079420 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079426 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:00.452) 0:00:17.575 *********
2026-03-17 00:52:05.079431 | orchestrator |
2026-03-17 00:52:05.079437 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:52:05.079443 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:00.164) 0:00:17.740 *********
2026-03-17 00:52:05.079448 | orchestrator |
2026-03-17 00:52:05.079454 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-17 00:52:05.079460 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:00.145) 0:00:17.886 *********
2026-03-17 00:52:05.079465 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:05.079471 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:05.079476 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:05.079482 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:05.079488 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:05.079493 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:05.079499 | orchestrator |
2026-03-17 00:52:05.079505 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-17 00:52:05.079511 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:09.963) 0:00:27.849 *********
2026-03-17 00:52:05.079517 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:52:05.079523 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:52:05.079529 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:52:05.079534 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:05.079540 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:05.079545 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:05.079552 | orchestrator |
2026-03-17 00:52:05.079557 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-17 00:52:05.079563 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:01.340) 0:00:29.189 *********
2026-03-17 00:52:05.079569 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:05.079575 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:05.079580 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:05.079586 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:05.079591 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:05.079597 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:05.079602 | orchestrator |
2026-03-17 00:52:05.079608 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-17 00:52:05.079613 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:08.684) 0:00:37.874 *********
2026-03-17 00:52:05.079618 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-17 00:52:05.079624 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-17 00:52:05.079630 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-17 00:52:05.079637 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-17 00:52:05.079642 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-17 00:52:05.079652 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-17 00:52:05.079658 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-17 00:52:05.079664 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-17 00:52:05.079670 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-17 00:52:05.079682 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-17 00:52:05.079688 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-17 00:52:05.079693 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-17 00:52:05.079703 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079708 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079714 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079720 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079726 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079732 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:52:05.079738 | orchestrator |
2026-03-17 00:52:05.079744 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-17 00:52:05.079801 | orchestrator | Tuesday 17 March 2026 00:51:48 +0000 (0:00:08.336) 0:00:46.210 *********
2026-03-17 00:52:05.079807 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-17 00:52:05.079813 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:05.079818 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-17 00:52:05.079824 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:05.079829 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-17 00:52:05.079835 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:05.079842 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-17 00:52:05.079848 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-17 00:52:05.079853 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-17 00:52:05.079859 | orchestrator |
2026-03-17 00:52:05.079865 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-17 00:52:05.079871 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:02.710) 0:00:48.921 *********
2026-03-17 00:52:05.079877 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079884 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079893 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:05.079901 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079907 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:05.079914 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:05.079920 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079926 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079932 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:52:05.079939 | orchestrator |
2026-03-17 00:52:05.079945 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-17 00:52:05.079951 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:04.606) 0:00:53.528 *********
2026-03-17 00:52:05.079956 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:05.079962 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:05.079968 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:05.079996 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:05.080002 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:05.080007 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:05.080013 | orchestrator |
2026-03-17 00:52:05.080019 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:52:05.080033 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:52:05.080040 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:52:05.080046 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:52:05.080052 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:52:05.080058 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:52:05.080071 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:52:05.080076 | orchestrator |
2026-03-17 00:52:05.080082 | orchestrator |
2026-03-17 00:52:05.080088 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:52:05.080094 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:08.075) 0:01:01.603 *********
2026-03-17 00:52:05.080100 | orchestrator | ===============================================================================
2026-03-17 00:52:05.080106 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.76s
2026-03-17 00:52:05.080112 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.96s
2026-03-17 00:52:05.080118 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.34s
2026-03-17 00:52:05.080125 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.61s
2026-03-17 00:52:05.080130 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.26s
2026-03-17 00:52:05.080137 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.71s
2026-03-17 00:52:05.080148 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.62s
2026-03-17 00:52:05.080154 | orchestrator | module-load : Load modules
---------------------------------------------- 1.61s 2026-03-17 00:52:05.080160 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.60s 2026-03-17 00:52:05.080165 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.44s 2026-03-17 00:52:05.080171 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.44s 2026-03-17 00:52:05.080177 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.34s 2026-03-17 00:52:05.080183 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.25s 2026-03-17 00:52:05.080190 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.17s 2026-03-17 00:52:05.080196 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.15s 2026-03-17 00:52:05.080202 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-03-17 00:52:05.080207 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.62s 2026-03-17 00:52:05.080213 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s 2026-03-17 00:52:05.080220 | orchestrator | 2026-03-17 00:52:05 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED 2026-03-17 00:52:05.080309 | orchestrator | 2026-03-17 00:52:05 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:52:05.080319 | orchestrator | 2026-03-17 00:52:05 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED 2026-03-17 00:52:05.080326 | orchestrator | 2026-03-17 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:08.105908 | orchestrator | 2026-03-17 00:52:08 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:52:08.107275 | orchestrator | 
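The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a client polling asynchronous task state until every task reaches a terminal state. A minimal sketch of that pattern in Python; `fetch_state` is a hypothetical stand-in for whatever backend query the real OSISM client makes, not its actual code:

```python
import time

# Task states we treat as terminal; STARTED keeps the loop polling.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=600.0):
    """Poll fetch_state(task_id) for each pending task until all tasks
    reach a terminal state, sleeping `interval` seconds between rounds.
    Raises TimeoutError if tasks are still pending at the deadline."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = fetch_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

The real client appears to recheck every few seconds; the sketch makes the interval and timeout explicit parameters instead.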
2026-03-17 00:52:08 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED
2026-03-17 00:52:08.108030 | orchestrator | 2026-03-17 00:52:08 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:52:08.109694 | orchestrator | 2026-03-17 00:52:08 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:52:08.111817 | orchestrator | 2026-03-17 00:52:08 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state STARTED
2026-03-17 00:52:08.111886 | orchestrator | 2026-03-17 00:52:08 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles from 00:52:11 through 00:53:24 trimmed: tasks a3064a57-0855-48c2-918a-286fdffb6cdd, 9cd5a014-a6b4-4e32-8d43-2b490d06edae, 62331526-f58b-4d28-bde0-2db23ea565fd, 22eef708-a8e4-4e89-abd9-ab92803db6aa and 0eb0c39d-0581-4072-9003-ea0bc985535f remained in state STARTED, rechecked every ~3 s]
2026-03-17 00:53:27.420955 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:53:27.422242 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED
2026-03-17 00:53:27.423387 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:53:27.424436 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:53:27.425804 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 0eb0c39d-0581-4072-9003-ea0bc985535f is in state SUCCESS
2026-03-17 00:53:27.425843 | orchestrator |
2026-03-17 00:53:27.427277 | orchestrator |
2026-03-17 00:53:27.427319 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-17 00:53:27.427333 | orchestrator |
2026-03-17 00:53:27.427346 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-17 00:53:27.427354 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:00.174) 0:00:00.174 *********
2026-03-17 00:53:27.427362 | orchestrator | ok: [localhost] => {
2026-03-17 00:53:27.427386 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-17 00:53:27.427395 | orchestrator | }
2026-03-17 00:53:27.427402 | orchestrator |
2026-03-17 00:53:27.427409 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-17 00:53:27.427417 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:00.041) 0:00:00.215 *********
2026-03-17 00:53:27.427425 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-17 00:53:27.427434 | orchestrator | ...ignoring
2026-03-17 00:53:27.427461 | orchestrator |
2026-03-17 00:53:27.427469 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-17 00:53:27.427476 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:03.239) 0:00:03.455 *********
2026-03-17 00:53:27.427483 | orchestrator | skipping: [localhost]
2026-03-17 00:53:27.427490 | orchestrator |
2026-03-17 00:53:27.427497 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-17 00:53:27.427504 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:00.134) 0:00:03.589 *********
2026-03-17 00:53:27.427511 | orchestrator | ok: [localhost]
2026-03-17 00:53:27.427518 | orchestrator |
2026-03-17 00:53:27.427525 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:53:27.427533 | orchestrator |
2026-03-17 00:53:27.427558 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:53:27.427566 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:01.113) 0:00:04.703 *********
2026-03-17 00:53:27.427573 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:27.427580 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:27.427587 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:27.427594 | orchestrator |
2026-03-17 00:53:27.427601 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:53:27.427608 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.664) 0:00:05.368 *********
2026-03-17 00:53:27.427615 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-17 00:53:27.427623 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-17 00:53:27.427630 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-17 00:53:27.427637 | orchestrator |
2026-03-17 00:53:27.427644 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-17 00:53:27.427651 | orchestrator |
2026-03-17 00:53:27.427658 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-17 00:53:27.427665 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.407) 0:00:05.775 *********
2026-03-17 00:53:27.427673 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:53:27.427679 | orchestrator |
2026-03-17 00:53:27.427685 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-17 00:53:27.427691 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.769) 0:00:06.545 *********
2026-03-17 00:53:27.427698 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:27.427705 | orchestrator |
2026-03-17 00:53:27.427712 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-17 00:53:27.427720 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:01.565) 0:00:08.110 *********
2026-03-17 00:53:27.427727 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:53:27.427735 | orchestrator |
2026-03-17 00:53:27.427742 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-17 00:53:27.427749 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:00.365) 0:00:08.476 *********
2026-03-17 00:53:27.427756 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:53:27.427763 | orchestrator |
2026-03-17 00:53:27.427770 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-17 00:53:27.427777 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:00.613) 0:00:09.090 *********
2026-03-17 00:53:27.427784 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:53:27.427791 | orchestrator |
2026-03-17 00:53:27.427798 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-17 00:53:27.427805 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.317) 0:00:09.408 *********
2026-03-17 00:53:27.427812 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:53:27.427819 | orchestrator |
2026-03-17 00:53:27.427826 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-17 00:53:27.427833 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.322) 0:00:09.730 *********
2026-03-17 00:53:27.427846 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:53:27.427854 | orchestrator |
2026-03-17 00:53:27.427860 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-17 00:53:27.427867 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.751) 0:00:10.482 *********
2026-03-17 00:53:27.427875 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:27.427882 | orchestrator |
2026-03-17 00:53:27.427889 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-17 00:53:27.427896 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:00.921) 0:00:11.404 *********
2026-03-17
00:53:27.427903 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:27.427910 | orchestrator | 2026-03-17 00:53:27.427917 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-17 00:53:27.427924 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:00.587) 0:00:11.991 ********* 2026-03-17 00:53:27.427931 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:27.427938 | orchestrator | 2026-03-17 00:53:27.427962 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-17 00:53:27.427970 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:00.364) 0:00:12.355 ********* 2026-03-17 00:53:27.427986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.427998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 
00:53:27.428020 | orchestrator | 2026-03-17 00:53:27.428028 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-17 00:53:27.428035 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:01.301) 0:00:13.657 ********* 2026-03-17 00:53:27.428052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428151 | orchestrator | 2026-03-17 00:53:27.428158 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-17 00:53:27.428171 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:02.049) 0:00:15.706 ********* 2026-03-17 00:53:27.428178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:27.428186 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:27.428193 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:27.428200 | orchestrator | 2026-03-17 00:53:27.428207 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-17 00:53:27.428214 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:01.708) 0:00:17.415 ********* 2026-03-17 00:53:27.428221 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:27.428227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:27.428235 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:27.428241 | orchestrator | 2026-03-17 00:53:27.428248 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-17 00:53:27.428255 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:02.715) 0:00:20.130 ********* 2026-03-17 00:53:27.428263 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:27.428269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:27.428277 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:27.428283 | orchestrator | 2026-03-17 00:53:27.428290 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-17 00:53:27.428296 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:01.435) 0:00:21.565 ********* 2026-03-17 00:53:27.428306 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:27.428312 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:27.428319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:27.428326 | orchestrator | 2026-03-17 00:53:27.428332 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-17 00:53:27.428343 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:01.879) 0:00:23.444 ********* 2026-03-17 00:53:27.428349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:27.428355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:27.428361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:27.428367 | orchestrator | 2026-03-17 00:53:27.428373 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-17 00:53:27.428379 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:01.739) 0:00:25.183 ********* 2026-03-17 00:53:27.428386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:27.428392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:27.428399 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:27.428406 | orchestrator | 2026-03-17 00:53:27.428412 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:53:27.428419 | orchestrator | Tuesday 17 March 2026 00:51:44 +0000 (0:00:01.730) 0:00:26.914 ********* 2026-03-17 
00:53:27.428425 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:27.428432 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:53:27.428445 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:53:27.428452 | orchestrator | 2026-03-17 00:53:27.428459 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-17 00:53:27.428465 | orchestrator | Tuesday 17 March 2026 00:51:45 +0000 (0:00:00.427) 0:00:27.341 ********* 2026-03-17 00:53:27.428472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:27.428506 | orchestrator | 2026-03-17 00:53:27.428513 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-17 00:53:27.428520 | orchestrator | Tuesday 17 March 2026 
00:51:46 +0000 (0:00:01.264) 0:00:28.606 *********
2026-03-17 00:53:27.428527 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:27.428534 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:27.428540 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:27.428547 | orchestrator |
2026-03-17 00:53:27.428553 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-17 00:53:27.428566 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.982) 0:00:29.588 *********
2026-03-17 00:53:27.428573 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:27.428579 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:27.428586 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:27.428592 | orchestrator |
2026-03-17 00:53:27.428599 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-17 00:53:27.428605 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:07.207) 0:00:36.796 *********
2026-03-17 00:53:27.428612 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:27.428619 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:27.428625 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:27.428632 | orchestrator |
2026-03-17 00:53:27.428639 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-17 00:53:27.428647 | orchestrator |
2026-03-17 00:53:27.428654 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-17 00:53:27.428661 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.455) 0:00:37.252 *********
2026-03-17 00:53:27.428668 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:27.428675 | orchestrator |
2026-03-17 00:53:27.428682 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-17 00:53:27.428688 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:00.875) 0:00:38.128 *********
2026-03-17 00:53:27.428695 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:53:27.428702 | orchestrator |
2026-03-17 00:53:27.428708 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:53:27.428715 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.212) 0:00:38.340 *********
2026-03-17 00:53:27.428721 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:27.428728 | orchestrator |
2026-03-17 00:53:27.428734 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:53:27.428740 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:07.296) 0:00:45.637 *********
2026-03-17 00:53:27.428747 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:27.428753 | orchestrator |
2026-03-17 00:53:27.428760 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-17 00:53:27.428766 | orchestrator |
2026-03-17 00:53:27.428773 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-17 00:53:27.428779 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:48.457) 0:01:34.094 *********
2026-03-17 00:53:27.428786 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:27.428793 | orchestrator |
2026-03-17 00:53:27.428800 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-17 00:53:27.428807 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:00.537) 0:01:34.631 *********
2026-03-17 00:53:27.428813 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:53:27.428820 | orchestrator |
2026-03-17 00:53:27.428826 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:53:27.428832 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:00.203) 0:01:34.835 *********
2026-03-17 00:53:27.428839 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:27.428845 | orchestrator |
2026-03-17 00:53:27.428852 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:53:27.428858 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:01.927) 0:01:36.762 *********
2026-03-17 00:53:27.428865 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:27.428871 | orchestrator |
2026-03-17 00:53:27.428878 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-17 00:53:27.428885 | orchestrator |
2026-03-17 00:53:27.428891 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-17 00:53:27.428898 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:12.820) 0:01:49.582 *********
2026-03-17 00:53:27.428905 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:27.428911 | orchestrator |
2026-03-17 00:53:27.428924 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-17 00:53:27.428931 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.618) 0:01:50.201 *********
2026-03-17 00:53:27.428937 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:53:27.428943 | orchestrator |
2026-03-17 00:53:27.428949 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:53:27.428956 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:00.258) 0:01:50.460 *********
2026-03-17 00:53:27.428963 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:27.428970 | orchestrator |
2026-03-17 00:53:27.428977 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:53:27.428989 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:01.527) 0:01:51.987 *********
2026-03-17 00:53:27.428996 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:27.429003 | orchestrator |
2026-03-17 00:53:27.429010 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-17 00:53:27.429016 | orchestrator |
2026-03-17 00:53:27.429022 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-17 00:53:27.429029 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:11.232) 0:02:03.219 *********
2026-03-17 00:53:27.429038 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:53:27.429044 | orchestrator |
2026-03-17 00:53:27.429050 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-17 00:53:27.429056 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:00.752) 0:02:03.972 *********
2026-03-17 00:53:27.429085 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:27.429091 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:27.429097 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:27.429103 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-17 00:53:27.429109 | orchestrator | enable_outward_rabbitmq_True
2026-03-17 00:53:27.429115 | orchestrator |
2026-03-17 00:53:27.429120 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-17 00:53:27.429126 | orchestrator | skipping: no hosts matched
2026-03-17 00:53:27.429132 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-17 00:53:27.429138 | orchestrator | outward_rabbitmq_restart
2026-03-17 00:53:27.429144 | orchestrator |
2026-03-17 00:53:27.429150 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-17 00:53:27.429156 | orchestrator | skipping: no hosts matched
2026-03-17 00:53:27.429162 | orchestrator |
2026-03-17 00:53:27.429168 | orchestrator | PLAY
[Apply rabbitmq (outward) post-configuration] *****************************
2026-03-17 00:53:27.429174 | orchestrator | skipping: no hosts matched
2026-03-17 00:53:27.429179 | orchestrator |
2026-03-17 00:53:27.429185 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:53:27.429192 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-17 00:53:27.429200 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 00:53:27.429206 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:53:27.429213 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:53:27.429220 | orchestrator |
2026-03-17 00:53:27.429226 | orchestrator |
2026-03-17 00:53:27.429231 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:53:27.429237 | orchestrator | Tuesday 17 March 2026 00:53:23 +0000 (0:00:02.073) 0:02:06.046 *********
2026-03-17 00:53:27.429243 | orchestrator | ===============================================================================
2026-03-17 00:53:27.429256 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 72.51s
2026-03-17 00:53:27.429262 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.75s
2026-03-17 00:53:27.429268 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.21s
2026-03-17 00:53:27.429274 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.24s
2026-03-17 00:53:27.429280 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.72s
2026-03-17 00:53:27.429286 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.07s
2026-03-17 00:53:27.429292 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.05s
2026-03-17 00:53:27.429298 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s
2026-03-17 00:53:27.429304 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.88s
2026-03-17 00:53:27.429310 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.74s
2026-03-17 00:53:27.429315 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.73s
2026-03-17 00:53:27.429321 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.71s
2026-03-17 00:53:27.429327 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.57s
2026-03-17 00:53:27.429333 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.44s
2026-03-17 00:53:27.429339 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.30s
2026-03-17 00:53:27.429345 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.26s
2026-03-17 00:53:27.429352 | orchestrator | Set kolla_action_rabbitmq = kolla_action_ng ----------------------------- 1.11s
2026-03-17 00:53:27.429358 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.98s
2026-03-17 00:53:27.429364 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s
2026-03-17 00:53:27.429370 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.77s
2026-03-17 00:53:27.429376 | orchestrator | 2026-03-17 00:53:27 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:30.457560 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task
a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:53:30.460980 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED
2026-03-17 00:53:30.462576 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state STARTED
2026-03-17 00:53:30.465274 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:53:30.465386 | orchestrator | 2026-03-17 00:53:30 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:13.189594 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:54:13.192004 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED
2026-03-17 00:54:13.194759 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task 62331526-f58b-4d28-bde0-2db23ea565fd is in state SUCCESS
2026-03-17 00:54:13.195787 | orchestrator |
2026-03-17 00:54:13.195838 | orchestrator |
2026-03-17 00:54:13.195848 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-17 00:54:13.195856 | orchestrator |
2026-03-17 00:54:13.195863 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec
'main' - Prerequisites] ***
Tuesday 17 March 2026 00:48:39 +0000 (0:00:00.255) 0:00:00.256 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Set same timezone on every Server] **************************
Tuesday 17 March 2026 00:48:40 +0000 (0:00:00.688) 0:00:00.944 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Set SELinux to disabled state] ******************************
Tuesday 17 March 2026 00:48:41 +0000 (0:00:01.073) 0:00:02.017 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
Tuesday 17 March 2026 00:48:41 +0000 (0:00:00.843) 0:00:02.861 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
Tuesday 17 March 2026 00:48:43 +0000 (0:00:01.837) 0:00:04.698 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
Tuesday 17 March 2026 00:48:45 +0000 (0:00:01.734) 0:00:06.433 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
Tuesday 17 March 2026 00:48:47 +0000 (0:00:01.561) 0:00:07.994 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Load br_netfilter] ******************************************
Tuesday 17 March 2026 00:48:48 +0000 (0:00:01.199) 0:00:09.194 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
Tuesday 17 March 2026 00:48:48 +0000 (0:00:00.556) 0:00:09.751 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5] (items: net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables)

TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
Tuesday 17 March 2026 00:48:49 +0000 (0:00:00.604) 0:00:10.355 *********
skipping: 
[testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
Tuesday 17 March 2026 00:48:50 +0000 (0:00:01.553) 0:00:11.909 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_download : Download k3s binary x64] **********************************
Tuesday 17 March 2026 00:48:52 +0000 (0:00:01.218) 0:00:13.127 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_download : Download k3s binary arm64] ********************************
Tuesday 17 March 2026 00:49:00 +0000 (0:00:07.965) 0:00:21.093 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_download : Download k3s binary armhf] ********************************
Tuesday 17 March 2026 00:49:01 +0000 (0:00:01.677) 0:00:22.771 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
Tuesday 17 March 2026 00:49:03 +0000 (0:00:01.693) 0:00:24.464 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
Tuesday 17 March 2026 00:49:04 +0000 (0:00:01.076) 0:00:25.540 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5] (items: rancher, rancher/k3s)

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Tuesday 17 March 2026 00:49:05 +0000 (0:00:00.617) 0:00:26.157 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Tuesday 17 March 2026 00:49:06 +0000 (0:00:00.779) 0:00:26.937 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2] [testbed-node-3] [testbed-node-4] [testbed-node-5]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Tuesday 17 March 2026 00:49:07 +0000 (0:00:01.544) 0:00:28.481 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Stop k3s-init] **********************************************
Tuesday 17 March 2026 00:49:08 +0000 (0:00:01.097) 0:00:29.579 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Stop k3s] ***************************************************
Tuesday 17 March 2026 00:49:10 +0000 (0:00:01.381) 0:00:30.960 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Tuesday 17 March 2026 00:49:11 +0000 (0:00:01.256) 0:00:32.216 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Tuesday 17 March 2026 00:49:12 +0000 (0:00:01.359) 0:00:33.575 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Tuesday 17 March 2026 00:49:13 +0000 (0:00:00.431) 0:00:34.007 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.932) 0:00:34.939 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Deploy vip manifest] ****************************************
Tuesday 17 March 2026 00:49:15 +0000 (0:00:01.460) 0:00:36.400 *********
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Tuesday 17 March 2026 00:49:16 +0000 (0:00:00.776) 0:00:37.176 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Create manifests directory on first master] *****************
Tuesday 17 March 2026 00:49:18 +0000 (0:00:01.899) 0:00:39.075 *********
skipping: [testbed-node-1] [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Tuesday 17 March 2026 00:49:18 +0000 (0:00:00.650) 0:00:39.725 *********
skipping: [testbed-node-1] [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Tuesday 17 March 2026 00:49:20 +0000 (0:00:01.725) 0:00:41.450 *********
skipping: [testbed-node-1] [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Tuesday 17 March 2026 00:49:21 +0000 (0:00:01.371) 0:00:42.822 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Tuesday 17 March 2026 00:49:22 +0000 (0:00:00.564) 0:00:43.387 *********
skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Tuesday 17 March 2026 00:49:22 +0000 (0:00:00.408) 0:00:43.795 *********
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Tuesday 17 March 2026 00:49:24 +0000 (0:00:02.171) 0:00:45.521 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Tuesday 17 March 2026 00:49:26 +0000 (0:00:00.641) 0:00:47.692 *********
ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Tuesday 17 March 2026 00:49:27 +0000 (0:00:00.641) 0:00:48.334 *********
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
[... the same retry message repeats for testbed-node-0, testbed-node-1 and testbed-node-2 on each attempt, down to 15 retries left ...]
2026-03-17 00:54:13.198784 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.198791 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.198804 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.198811 | orchestrator |
2026-03-17 00:54:13.198819 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-17 00:54:13.198826 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:01:04.524) 0:01:52.859 *********
2026-03-17 00:54:13.198833 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.198841 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:13.198848 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:13.198855 | orchestrator |
2026-03-17 00:54:13.198862 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-17 00:54:13.198870 | orchestrator | Tuesday 17 March 2026 00:50:32 +0000 (0:00:00.645) 0:01:53.505 *********
2026-03-17 00:54:13.198877 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.198885 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.198892 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.198898 | orchestrator |
2026-03-17 00:54:13.198906 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-17 00:54:13.198913 | orchestrator | Tuesday 17 March 2026 00:50:33 +0000 (0:00:00.983) 0:01:54.488 *********
2026-03-17 00:54:13.198921 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.198928 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.198935 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.198943 | orchestrator |
2026-03-17 00:54:13.198950 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-17 00:54:13.198957 | orchestrator | Tuesday 17 March 2026 00:50:34 +0000 (0:00:01.314) 0:01:55.803 *********
2026-03-17 00:54:13.198964 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.198971 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.198978 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.198985 | orchestrator |
2026-03-17 00:54:13.198993 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-17 00:54:13.199000 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:41.095) 0:02:36.898 *********
2026-03-17 00:54:13.199008 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199015 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199022 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.199030 | orchestrator |
2026-03-17 00:54:13.199037 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-17 00:54:13.199044 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:01.080) 0:02:37.979 *********
2026-03-17 00:54:13.199051 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199059 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199071 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.199079 | orchestrator |
2026-03-17 00:54:13.199103 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-17 00:54:13.199111 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:01.012) 0:02:38.993 *********
2026-03-17 00:54:13.199123 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.199131 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.199138 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.199146 | orchestrator |
2026-03-17 00:54:13.199153 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-17 00:54:13.199160 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:00.740) 0:02:39.733 *********
2026-03-17 00:54:13.199167 | orchestrator | ok: [testbed-node-2]
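The four node-token tasks above (register the file's mode, relax it, read the token, restore the mode) follow a common pattern for reading a root-protected secret without leaving it world-readable. A minimal Python sketch of that pattern — the file path and token value here are placeholders, not the role's actual code:

```python
import os
import stat
import tempfile

def read_protected_token(path: str) -> str:
    """Temporarily widen file permissions, read the token, then restore
    the original mode -- mirroring the register/change/read/restore
    task sequence in the k3s_server role output above."""
    original_mode = stat.S_IMODE(os.stat(path).st_mode)  # register mode
    os.chmod(path, 0o644)                                # change file access
    try:
        with open(path) as fh:
            token = fh.read().strip()                    # read node-token
    finally:
        os.chmod(path, original_mode)                    # restore file access
    return token

# demo with a throwaway file standing in for the node-token file
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("K10abc::server:secret\n")
os.chmod(f.name, 0o600)
print(read_protected_token(f.name))
```

The `finally` block matters: the restrictive mode is reinstated even if the read fails, which is the property the role's restore task provides.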
2026-03-17 00:54:13.199175 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199182 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199189 | orchestrator |
2026-03-17 00:54:13.199197 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-17 00:54:13.199205 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.632) 0:02:40.365 *********
2026-03-17 00:54:13.199213 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199220 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199227 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.199235 | orchestrator |
2026-03-17 00:54:13.199242 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-17 00:54:13.199250 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.348) 0:02:40.714 *********
2026-03-17 00:54:13.199257 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.199265 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.199272 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.199279 | orchestrator |
2026-03-17 00:54:13.199286 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-17 00:54:13.199294 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:00.927) 0:02:41.642 *********
2026-03-17 00:54:13.199301 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.199308 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.199316 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.199323 | orchestrator |
2026-03-17 00:54:13.199330 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-17 00:54:13.199337 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:00.881) 0:02:42.523 *********
2026-03-17 00:54:13.199344 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.199351 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.199359 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.199366 | orchestrator |
2026-03-17 00:54:13.199373 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-17 00:54:13.199380 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:00.895) 0:02:43.419 *********
2026-03-17 00:54:13.199387 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:13.199394 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:13.199401 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:13.199409 | orchestrator |
2026-03-17 00:54:13.199416 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-17 00:54:13.199424 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.917) 0:02:44.336 *********
2026-03-17 00:54:13.199431 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.199438 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:13.199445 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:13.199452 | orchestrator |
2026-03-17 00:54:13.199459 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-17 00:54:13.199465 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.393) 0:02:44.729 *********
2026-03-17 00:54:13.199472 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.199479 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:13.199486 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:13.199499 | orchestrator |
2026-03-17 00:54:13.199512 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-17 00:54:13.199520 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.336) 0:02:45.066 *********
2026-03-17 00:54:13.199527 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.199535 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199542 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199549 | orchestrator |
2026-03-17 00:54:13.199556 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-17 00:54:13.199563 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:00.873) 0:02:45.939 *********
2026-03-17 00:54:13.199570 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.199578 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.199584 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.199591 | orchestrator |
2026-03-17 00:54:13.199598 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-17 00:54:13.199605 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:00.908) 0:02:46.847 *********
2026-03-17 00:54:13.199612 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:54:13.199619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:54:13.199626 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:54:13.199718 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:54:13.199730 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:54:13.199737 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:54:13.199744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:54:13.199751 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:54:13.199758 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:54:13.199766 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-17 00:54:13.199773 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:54:13.199785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:54:13.199793 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-17 00:54:13.199800 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:54:13.199807 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:54:13.199815 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:54:13.199822 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:54:13.199830 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:54:13.199837 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:54:13.199844 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:54:13.199851 | orchestrator |
2026-03-17 00:54:13.199859 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-17 00:54:13.199866 | orchestrator |
2026-03-17 00:54:13.199873 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-17 00:54:13.199881 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:03.786)
0:02:50.633 *********
2026-03-17 00:54:13.199894 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:54:13.199902 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:54:13.199909 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:54:13.199916 | orchestrator |
2026-03-17 00:54:13.199923 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-17 00:54:13.199930 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:00.273) 0:02:50.907 *********
2026-03-17 00:54:13.199937 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:54:13.199945 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:54:13.199952 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:54:13.199959 | orchestrator |
2026-03-17 00:54:13.199966 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-17 00:54:13.199973 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:00.639) 0:02:51.547 *********
2026-03-17 00:54:13.199980 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:54:13.199988 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:54:13.199994 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:54:13.200001 | orchestrator |
2026-03-17 00:54:13.200008 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-17 00:54:13.200015 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.423) 0:02:51.971 *********
2026-03-17 00:54:13.200022 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:54:13.200030 | orchestrator |
2026-03-17 00:54:13.200037 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-17 00:54:13.200044 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.490) 0:02:52.462 *********
2026-03-17 00:54:13.200051 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:13.200058 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:13.200073 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:13.200080 | orchestrator |
2026-03-17 00:54:13.200099 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-17 00:54:13.200107 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.343) 0:02:52.805 *********
2026-03-17 00:54:13.200114 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:13.200121 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:13.200128 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:13.200135 | orchestrator |
2026-03-17 00:54:13.200142 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-17 00:54:13.200149 | orchestrator | Tuesday 17 March 2026 00:51:32 +0000 (0:00:00.449) 0:02:53.254 *********
2026-03-17 00:54:13.200155 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:13.200162 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:13.200169 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:13.200176 | orchestrator |
2026-03-17 00:54:13.200183 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-17 00:54:13.200190 | orchestrator | Tuesday 17 March 2026 00:51:32 +0000 (0:00:00.328) 0:02:53.583 *********
2026-03-17 00:54:13.200197 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:54:13.200204 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:54:13.200211 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:54:13.200218 | orchestrator |
2026-03-17 00:54:13.200225 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-17 00:54:13.200232 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:00.650) 0:02:54.233 *********
2026-03-17 00:54:13.200239 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:54:13.200247 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:54:13.200254 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:54:13.200261 | orchestrator |
2026-03-17 00:54:13.200268 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-17 00:54:13.200275 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:01.122) 0:02:55.355 *********
2026-03-17 00:54:13.200282 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:54:13.200289 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:54:13.200301 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:54:13.200308 | orchestrator |
2026-03-17 00:54:13.200315 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-17 00:54:13.200322 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:01.532) 0:02:56.887 *********
2026-03-17 00:54:13.200329 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:54:13.200336 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:54:13.200343 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:54:13.200351 | orchestrator |
2026-03-17 00:54:13.200358 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-17 00:54:13.200365 | orchestrator |
2026-03-17 00:54:13.200380 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-17 00:54:13.200387 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:10.232) 0:03:07.120 *********
2026-03-17 00:54:13.200394 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.200401 | orchestrator |
2026-03-17 00:54:13.200408 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-17 00:54:13.200416 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:00.729) 0:03:07.850 *********
2026-03-17 00:54:13.200423 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200507 | orchestrator |
2026-03-17 00:54:13.200517 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-17 00:54:13.200525 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.372) 0:03:08.222 *********
2026-03-17 00:54:13.200532 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-17 00:54:13.200539 | orchestrator |
2026-03-17 00:54:13.200546 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-17 00:54:13.200553 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.587) 0:03:08.810 *********
2026-03-17 00:54:13.200560 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200567 | orchestrator |
2026-03-17 00:54:13.200573 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-17 00:54:13.200579 | orchestrator | Tuesday 17 March 2026 00:51:48 +0000 (0:00:00.874) 0:03:09.685 *********
2026-03-17 00:54:13.200586 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200592 | orchestrator |
2026-03-17 00:54:13.200599 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-17 00:54:13.200606 | orchestrator | Tuesday 17 March 2026 00:51:49 +0000 (0:00:00.626) 0:03:10.311 *********
2026-03-17 00:54:13.200613 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:54:13.200620 | orchestrator |
2026-03-17 00:54:13.200627 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-17 00:54:13.200635 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:02.025) 0:03:12.337 *********
2026-03-17 00:54:13.200641 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:54:13.200648 | orchestrator |
2026-03-17 00:54:13.200656 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
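The two "Change server address in the kubeconfig" tasks above repoint the kubeconfig that k3s writes (which targets the local API endpoint) at the cluster address used here, https://192.168.16.8:6443. A minimal sketch of that rewrite, assuming a plain textual substitution of the `server:` line (the playbook may use a different mechanism):

```python
import re

def point_kubeconfig_at(kubeconfig_text: str, server_url: str) -> str:
    """Rewrite every cluster 'server:' entry in a kubeconfig document
    to the given URL, preserving indentation."""
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + server_url,
                  kubeconfig_text)

# k3s writes a kubeconfig pointing at the local endpoint; repoint it
sample = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""
print(point_kubeconfig_at(sample, "https://192.168.16.8:6443"))
```

A YAML-aware edit (load, mutate, dump) would be more robust for kubeconfigs with comments or unusual layouts; the regex form is shown only because it is dependency-free.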
2026-03-17 00:54:13.200663 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:00.984) 0:03:13.322 *********
2026-03-17 00:54:13.200669 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200676 | orchestrator |
2026-03-17 00:54:13.200683 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-17 00:54:13.200714 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:00.573) 0:03:13.895 *********
2026-03-17 00:54:13.200722 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200729 | orchestrator |
2026-03-17 00:54:13.200736 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-17 00:54:13.200743 | orchestrator |
2026-03-17 00:54:13.200750 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-17 00:54:13.200757 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.140) 0:03:14.315 *********
2026-03-17 00:54:13.200764 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.200772 | orchestrator |
2026-03-17 00:54:13.200779 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-17 00:54:13.200791 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.209) 0:03:14.456 *********
2026-03-17 00:54:13.200805 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:54:13.200813 | orchestrator |
2026-03-17 00:54:13.200820 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-17 00:54:13.200828 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.209) 0:03:14.666 *********
2026-03-17 00:54:13.200835 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.200842 | orchestrator |
2026-03-17 00:54:13.200849 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-17 00:54:13.200856 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:01.124) 0:03:15.790 *********
2026-03-17 00:54:13.200863 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.200870 | orchestrator |
2026-03-17 00:54:13.200877 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-17 00:54:13.200883 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:01.459) 0:03:17.250 *********
2026-03-17 00:54:13.200890 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200897 | orchestrator |
2026-03-17 00:54:13.200904 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-17 00:54:13.200911 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:01.051) 0:03:18.301 *********
2026-03-17 00:54:13.200918 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.200925 | orchestrator |
2026-03-17 00:54:13.200932 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-17 00:54:13.200939 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:00.430) 0:03:18.731 *********
2026-03-17 00:54:13.200946 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200953 | orchestrator |
2026-03-17 00:54:13.200961 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-17 00:54:13.200968 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:06.052) 0:03:24.784 *********
2026-03-17 00:54:13.200974 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.200982 | orchestrator |
2026-03-17 00:54:13.200989 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-17 00:54:13.200996 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:12.191) 0:03:36.976 *********
2026-03-17 00:54:13.201003 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.201010 | orchestrator |
2026-03-17 00:54:13.201017 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-17 00:54:13.201024 | orchestrator |
2026-03-17 00:54:13.201032 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-17 00:54:13.201039 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:00.533) 0:03:37.509 *********
2026-03-17 00:54:13.201046 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.201053 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.201060 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.201067 | orchestrator |
2026-03-17 00:54:13.201077 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-17 00:54:13.201096 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:00.431) 0:03:37.941 *********
2026-03-17 00:54:13.201104 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201111 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:13.201118 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:13.201125 | orchestrator |
2026-03-17 00:54:13.201132 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-17 00:54:13.201139 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:00.294) 0:03:38.235 *********
2026-03-17 00:54:13.201146 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:54:13.201153 | orchestrator |
2026-03-17 00:54:13.201161 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-17 00:54:13.201173 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:00.471) 0:03:38.706 *********
2026-03-17 00:54:13.201180 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201187 | orchestrator |
2026-03-17 00:54:13.201194 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-17 00:54:13.201201 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:00.703) 0:03:39.410 *********
2026-03-17 00:54:13.201207 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201214 | orchestrator |
2026-03-17 00:54:13.201221 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-17 00:54:13.201228 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:00.776) 0:03:40.187 *********
2026-03-17 00:54:13.201235 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201242 | orchestrator |
2026-03-17 00:54:13.201249 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-17 00:54:13.201256 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:00.215) 0:03:40.403 *********
2026-03-17 00:54:13.201263 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201270 | orchestrator |
2026-03-17 00:54:13.201277 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-17 00:54:13.201284 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.902) 0:03:41.306 *********
2026-03-17 00:54:13.201291 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201298 | orchestrator |
2026-03-17 00:54:13.201305 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-17 00:54:13.201312 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.107) 0:03:41.414 *********
2026-03-17 00:54:13.201319 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201326 | orchestrator |
2026-03-17 00:54:13.201333 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-17 00:54:13.201340 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.095) 0:03:41.510 *********
2026-03-17 00:54:13.201347 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201354 | orchestrator |
2026-03-17 00:54:13.201361 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-17 00:54:13.201368 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.101) 0:03:41.611 *********
2026-03-17 00:54:13.201375 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201382 | orchestrator |
2026-03-17 00:54:13.201389 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-17 00:54:13.201401 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.108) 0:03:41.720 *********
2026-03-17 00:54:13.201408 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201415 | orchestrator |
2026-03-17 00:54:13.201422 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-17 00:54:13.201429 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:04.537) 0:03:46.258 *********
2026-03-17 00:54:13.201437 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-17 00:54:13.201443 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-17 00:54:13.201451 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left).
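The "FAILED - RETRYING" lines above are not errors: they come from Ansible's `retries`/`until` loop polling until the Cilium rollout is complete (here it takes 1:20 overall). A generic Python sketch of that polling pattern, not the role's actual task:

```python
import time

def wait_for(check, retries: int = 30, delay: float = 0.0):
    """Poll `check` until it returns truthy, emitting the same kind of
    countdown Ansible prints for a retries/until loop; raise if the
    retry budget is exhausted."""
    for attempt in range(retries):
        result = check()
        if result:
            return result
        print(f"FAILED - RETRYING ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    raise TimeoutError("resource never became ready")

# simulate a rollout that becomes ready on the third poll
state = iter([False, False, True])
wait_for(lambda: next(state), retries=30, delay=0)
```

In the real task, `check` would be something like a `kubectl rollout status` call per item (deployment/cilium-operator, daemonset/cilium, and so on), with a non-zero `delay` between polls.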
2026-03-17 00:54:13.201457 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-17 00:54:13.201463 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-17 00:54:13.201469 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-17 00:54:13.201475 | orchestrator |
2026-03-17 00:54:13.201482 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-17 00:54:13.201489 | orchestrator | Tuesday 17 March 2026 00:53:45 +0000 (0:01:20.135) 0:05:06.393 *********
2026-03-17 00:54:13.201496 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201503 | orchestrator |
2026-03-17 00:54:13.201510 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-17 00:54:13.201521 | orchestrator | Tuesday 17 March 2026 00:53:46 +0000 (0:00:01.401) 0:05:07.795 *********
2026-03-17 00:54:13.201528 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201534 | orchestrator |
2026-03-17 00:54:13.201541 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-17 00:54:13.201548 | orchestrator | Tuesday 17 March 2026 00:53:48 +0000 (0:00:01.809) 0:05:09.605 *********
2026-03-17 00:54:13.201555 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:54:13.201562 | orchestrator |
2026-03-17 00:54:13.201569 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-17 00:54:13.201576 | orchestrator | Tuesday 17 March 2026 00:53:49 +0000 (0:00:01.234) 0:05:10.840 *********
2026-03-17 00:54:13.201583 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201590 | orchestrator |
2026-03-17 00:54:13.201597 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-17 00:54:13.201604 | orchestrator | Tuesday 17 March 2026 00:53:50 +0000 (0:00:00.123) 0:05:10.963 *********
2026-03-17 00:54:13.201661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-17 00:54:13.201672 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-17 00:54:13.201679 | orchestrator |
2026-03-17 00:54:13.201686 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-17 00:54:13.201694 | orchestrator | Tuesday 17 March 2026 00:53:52 +0000 (0:00:02.425) 0:05:13.388 *********
2026-03-17 00:54:13.201700 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:13.201707 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:13.201715 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:13.201722 | orchestrator |
2026-03-17 00:54:13.201729 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-17 00:54:13.201736 | orchestrator | Tuesday 17 March 2026 00:53:52 +0000 (0:00:00.325) 0:05:13.714 *********
2026-03-17 00:54:13.201743 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.201750 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.201757 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.201764 | orchestrator |
2026-03-17 00:54:13.201771 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-17 00:54:13.201779 | orchestrator |
2026-03-17 00:54:13.201786 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-17 00:54:13.201793 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:00.139) 0:05:14.605 *********
2026-03-17 00:54:13.201801 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:13.201808 | orchestrator |
2026-03-17 00:54:13.201814 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-17 00:54:13.201821 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:00.499) 0:05:14.745 *********
2026-03-17 00:54:13.201828 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:54:13.201835 | orchestrator |
2026-03-17 00:54:13.201842 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-17 00:54:13.201849 | orchestrator | Tuesday 17 March 2026 00:53:54 +0000 (0:00:00.499) 0:05:15.244 *********
2026-03-17 00:54:13.201856 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:13.201863 | orchestrator |
2026-03-17 00:54:13.201870 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-17 00:54:13.201877 | orchestrator |
2026-03-17 00:54:13.201884 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-17 00:54:13.201891 | orchestrator | Tuesday 17 March 2026 00:54:00 +0000 (0:00:06.036) 0:05:21.281 *********
2026-03-17 00:54:13.201898 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:54:13.201905 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:54:13.201913 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:54:13.201919 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:13.201927 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:13.201938 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:13.201945 | orchestrator |
2026-03-17 00:54:13.201952 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-17 00:54:13.201959 | orchestrator | Tuesday 17 March 2026 00:54:00 +0000 (0:00:00.628) 0:05:21.909 *********
2026-03-17 00:54:13.201967 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:54:13.201974 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:54:13.201985 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:54:13.201993 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:54:13.202000 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:54:13.202006 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:54:13.202037 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:54:13.202046 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:54:13.202053 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:54:13.202060 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:54:13.202068 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:54:13.202075 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:54:13.202082 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:54:13.202099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:54:13.202106 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-17 00:54:13.202112 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:54:13.202119 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-17 00:54:13.202126 | orchestrator | ok: [testbed-node-2 -> localhost]
=> (item=node-role.osism.tech/network-plane=true) 2026-03-17 00:54:13.202133 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:54:13.202139 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:54:13.202146 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:54:13.202153 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:54:13.202164 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:54:13.202171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:54:13.202178 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:54:13.202185 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:54:13.202192 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:54:13.202199 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:54:13.202206 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:54:13.202213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:54:13.202220 | orchestrator | 2026-03-17 00:54:13.202227 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-17 00:54:13.202239 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:10.193) 0:05:32.103 ********* 2026-03-17 00:54:13.202246 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:54:13.202253 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:54:13.202260 | orchestrator | 
skipping: [testbed-node-5] 2026-03-17 00:54:13.202267 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:13.202274 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:13.202281 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:13.202288 | orchestrator | 2026-03-17 00:54:13.202295 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-17 00:54:13.202302 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:00.489) 0:05:32.593 ********* 2026-03-17 00:54:13.202309 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:54:13.202316 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:54:13.202323 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:54:13.202330 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:13.202337 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:13.202344 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:13.202351 | orchestrator | 2026-03-17 00:54:13.202358 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:54:13.202365 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:13.202374 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-17 00:54:13.202381 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:54:13.202388 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:54:13.202400 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:54:13.202408 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:54:13.202415 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:54:13.202422 | orchestrator | 2026-03-17 00:54:13.202429 | orchestrator | 2026-03-17 00:54:13.202435 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:54:13.202442 | orchestrator | Tuesday 17 March 2026 00:54:12 +0000 (0:00:00.492) 0:05:33.086 ********* 2026-03-17 00:54:13.202449 | orchestrator | =============================================================================== 2026-03-17 00:54:13.202456 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 80.14s 2026-03-17 00:54:13.202462 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 64.53s 2026-03-17 00:54:13.202468 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 41.10s 2026-03-17 00:54:13.202475 | orchestrator | kubectl : Install required packages ------------------------------------ 12.19s 2026-03-17 00:54:13.202482 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.23s 2026-03-17 00:54:13.202489 | orchestrator | Manage labels ---------------------------------------------------------- 10.19s 2026-03-17 00:54:13.202496 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.97s 2026-03-17 00:54:13.202503 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.05s 2026-03-17 00:54:13.202510 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.04s 2026-03-17 00:54:13.202517 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.54s 2026-03-17 00:54:13.202528 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.79s 2026-03-17 00:54:13.202536 | orchestrator 
| k3s_server_post : Test for BGP config resources ------------------------- 2.43s 2026-03-17 00:54:13.202543 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.17s 2026-03-17 00:54:13.202553 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.03s 2026-03-17 00:54:13.202560 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.90s 2026-03-17 00:54:13.202567 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.84s 2026-03-17 00:54:13.202574 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.81s 2026-03-17 00:54:13.202581 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.73s 2026-03-17 00:54:13.202587 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.73s 2026-03-17 00:54:13.202594 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.73s 2026-03-17 00:54:13.202601 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:13.202608 | orchestrator | 2026-03-17 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:16.232738 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:16.234450 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:16.235934 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 49e333d4-6699-497b-b4f3-b5390d6b1814 is in state STARTED 2026-03-17 00:54:16.237352 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:16.238627 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 20fe4d2b-fc7f-4ca9-a611-b64218d18b2c is in state 
STARTED 2026-03-17 00:54:16.238817 | orchestrator | 2026-03-17 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:19.279357 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:19.281336 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:19.281988 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 49e333d4-6699-497b-b4f3-b5390d6b1814 is in state STARTED 2026-03-17 00:54:19.282916 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:19.283248 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 20fe4d2b-fc7f-4ca9-a611-b64218d18b2c is in state SUCCESS 2026-03-17 00:54:19.283337 | orchestrator | 2026-03-17 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:22.311268 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:22.315579 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:22.316928 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 49e333d4-6699-497b-b4f3-b5390d6b1814 is in state STARTED 2026-03-17 00:54:22.318957 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:22.319115 | orchestrator | 2026-03-17 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:25.366792 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:25.369602 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:25.371954 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 49e333d4-6699-497b-b4f3-b5390d6b1814 is in state SUCCESS 2026-03-17 
00:54:25.373950 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:25.374368 | orchestrator | 2026-03-17 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:28.414486 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:28.414578 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:28.414640 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:28.414651 | orchestrator | 2026-03-17 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:31.446360 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:31.446457 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state STARTED 2026-03-17 00:54:31.447361 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:54:31.447432 | orchestrator | 2026-03-17 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:34.479214 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED 2026-03-17 00:54:34.482683 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task 9cd5a014-a6b4-4e32-8d43-2b490d06edae is in state SUCCESS 2026-03-17 00:54:34.484300 | orchestrator | 2026-03-17 00:54:34.484353 | orchestrator | 2026-03-17 00:54:34.484361 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-17 00:54:34.484367 | orchestrator | 2026-03-17 00:54:34.484373 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-17 00:54:34.484378 | orchestrator | Tuesday 17 March 2026 
00:54:15 +0000 (0:00:00.285) 0:00:00.285 ********* 2026-03-17 00:54:34.484384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:54:34.484389 | orchestrator | 2026-03-17 00:54:34.484394 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-17 00:54:34.484399 | orchestrator | Tuesday 17 March 2026 00:54:16 +0000 (0:00:00.962) 0:00:01.247 ********* 2026-03-17 00:54:34.484404 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:34.484409 | orchestrator | 2026-03-17 00:54:34.484414 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-17 00:54:34.484419 | orchestrator | Tuesday 17 March 2026 00:54:18 +0000 (0:00:01.472) 0:00:02.720 ********* 2026-03-17 00:54:34.484423 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:34.484428 | orchestrator | 2026-03-17 00:54:34.484432 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:54:34.484437 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:34.484443 | orchestrator | 2026-03-17 00:54:34.484448 | orchestrator | 2026-03-17 00:54:34.484452 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:54:34.484457 | orchestrator | Tuesday 17 March 2026 00:54:18 +0000 (0:00:00.417) 0:00:03.137 ********* 2026-03-17 00:54:34.484463 | orchestrator | =============================================================================== 2026-03-17 00:54:34.484470 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2026-03-17 00:54:34.484478 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s 2026-03-17 00:54:34.484485 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 
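The play recapped above fetches the kubeconfig from testbed-node-0 and then rewrites its API server address. A minimal sketch of that last step, assuming k3s's default kubeconfig points at 127.0.0.1:6443 (the replacement address 192.168.16.10 is the node address shown in the log; the actual play uses an Ansible `replace`/`lineinfile`-style task rather than `sed`):

```shell
# Sketch only: emulate "Change server address in the kubeconfig file".
# k3s writes its kubeconfig against the loopback address (assumed default);
# the play points it at the first control-plane node instead.
kubeconfig=$(mktemp)
printf '    server: https://127.0.0.1:6443\n' > "$kubeconfig"
# Rewrite the server endpoint in place.
sed -i 's|https://127.0.0.1:6443|https://192.168.16.10:6443|' "$kubeconfig"
cat "$kubeconfig"
```

After this rewrite the kubeconfig is usable from the manager host, which is why the follow-up play can set `KUBECONFIG` and enable `kubectl` completion there.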
2026-03-17 00:54:34.484519 | orchestrator | 2026-03-17 00:54:34.484529 | orchestrator | 2026-03-17 00:54:34.484537 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-17 00:54:34.484545 | orchestrator | 2026-03-17 00:54:34.484552 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-17 00:54:34.484559 | orchestrator | Tuesday 17 March 2026 00:54:15 +0000 (0:00:00.306) 0:00:00.306 ********* 2026-03-17 00:54:34.484567 | orchestrator | ok: [testbed-manager] 2026-03-17 00:54:34.484575 | orchestrator | 2026-03-17 00:54:34.484593 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-17 00:54:34.484601 | orchestrator | Tuesday 17 March 2026 00:54:16 +0000 (0:00:00.971) 0:00:01.277 ********* 2026-03-17 00:54:34.484616 | orchestrator | ok: [testbed-manager] 2026-03-17 00:54:34.484623 | orchestrator | 2026-03-17 00:54:34.484631 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-17 00:54:34.484638 | orchestrator | Tuesday 17 March 2026 00:54:17 +0000 (0:00:00.527) 0:00:01.805 ********* 2026-03-17 00:54:34.484644 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:54:34.484651 | orchestrator | 2026-03-17 00:54:34.484658 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-17 00:54:34.484665 | orchestrator | Tuesday 17 March 2026 00:54:18 +0000 (0:00:00.937) 0:00:02.743 ********* 2026-03-17 00:54:34.484672 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:34.484680 | orchestrator | 2026-03-17 00:54:34.484687 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-17 00:54:34.484697 | orchestrator | Tuesday 17 March 2026 00:54:19 +0000 (0:00:01.006) 0:00:03.750 ********* 2026-03-17 00:54:34.484707 | orchestrator | changed: 
[testbed-manager] 2026-03-17 00:54:34.484715 | orchestrator | 2026-03-17 00:54:34.484722 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-17 00:54:34.484729 | orchestrator | Tuesday 17 March 2026 00:54:19 +0000 (0:00:00.468) 0:00:04.219 ********* 2026-03-17 00:54:34.484737 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:54:34.484745 | orchestrator | 2026-03-17 00:54:34.484752 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-17 00:54:34.484759 | orchestrator | Tuesday 17 March 2026 00:54:21 +0000 (0:00:01.572) 0:00:05.791 ********* 2026-03-17 00:54:34.484766 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:54:34.484773 | orchestrator | 2026-03-17 00:54:34.484779 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-17 00:54:34.484785 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.779) 0:00:06.571 ********* 2026-03-17 00:54:34.484792 | orchestrator | ok: [testbed-manager] 2026-03-17 00:54:34.484799 | orchestrator | 2026-03-17 00:54:34.484807 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-17 00:54:34.484815 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.358) 0:00:06.929 ********* 2026-03-17 00:54:34.484823 | orchestrator | ok: [testbed-manager] 2026-03-17 00:54:34.484831 | orchestrator | 2026-03-17 00:54:34.484840 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:54:34.484848 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:34.484857 | orchestrator | 2026-03-17 00:54:34.484862 | orchestrator | 2026-03-17 00:54:34.484867 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 
00:54:34.484872 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.273) 0:00:07.203 ********* 2026-03-17 00:54:34.484889 | orchestrator | =============================================================================== 2026-03-17 00:54:34.484894 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s 2026-03-17 00:54:34.484899 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s 2026-03-17 00:54:34.484904 | orchestrator | Get home directory of operator user ------------------------------------- 0.97s 2026-03-17 00:54:34.484929 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s 2026-03-17 00:54:34.484934 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s 2026-03-17 00:54:34.484939 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2026-03-17 00:54:34.484944 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s 2026-03-17 00:54:34.484948 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2026-03-17 00:54:34.484953 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2026-03-17 00:54:34.484957 | orchestrator | 2026-03-17 00:54:34.484962 | orchestrator | 2026-03-17 00:54:34.484967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:54:34.484971 | orchestrator | 2026-03-17 00:54:34.484976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:54:34.484980 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.178) 0:00:00.178 ********* 2026-03-17 00:54:34.484985 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:54:34.484990 | orchestrator | ok: [testbed-node-4] 2026-03-17 
00:54:34.484994 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:54:34.484999 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.485003 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.485008 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.485013 | orchestrator | 2026-03-17 00:54:34.485017 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:54:34.485022 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.813) 0:00:00.992 ********* 2026-03-17 00:54:34.485026 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-17 00:54:34.485031 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-17 00:54:34.485036 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-17 00:54:34.485041 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-17 00:54:34.485045 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-17 00:54:34.485050 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-17 00:54:34.485054 | orchestrator | 2026-03-17 00:54:34.485059 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-17 00:54:34.485063 | orchestrator | 2026-03-17 00:54:34.485068 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-17 00:54:34.485090 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:01.389) 0:00:02.381 ********* 2026-03-17 00:54:34.485100 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:34.485252 | orchestrator | 2026-03-17 00:54:34.485259 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-17 00:54:34.485264 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:01.257) 
0:00:03.639 ********* 2026-03-17 00:54:34.485271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485332 | orchestrator | 2026-03-17 00:54:34.485337 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-17 00:54:34.485341 | orchestrator | Tuesday 17 March 2026 00:52:12 +0000 (0:00:01.910) 0:00:05.549 ********* 2026-03-17 00:54:34.485346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-17 00:54:34.485356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485379 | orchestrator | 2026-03-17 00:54:34.485383 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] 
************* 2026-03-17 00:54:34.485390 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:01.923) 0:00:07.473 ********* 2026-03-17 00:54:34.485399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485440 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485456 | orchestrator | 2026-03-17 00:54:34.485463 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-17 00:54:34.485470 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:01.731) 0:00:09.205 ********* 2026-03-17 00:54:34.485548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485604 | 
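The "Copying over systemd override" task installs a drop-in unit fragment for the container's service. A hypothetical illustration of what such a drop-in can look like; the path and keys are assumptions for illustration, and the actual template shipped by kolla-ansible may differ.

```ini
; Hypothetical drop-in, e.g. under
; /etc/systemd/system/kolla-ovn_controller-container.service.d/
; (path and directives are illustrative, not taken from this run)
[Unit]
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=2
```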
orchestrator | 2026-03-17 00:54:34.485609 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-17 00:54:34.485613 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:01.752) 0:00:10.957 ********* 2026-03-17 00:54:34.485618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.485664 | orchestrator | 2026-03-17 00:54:34.485672 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-17 00:54:34.485679 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.630) 0:00:12.588 ********* 2026-03-17 00:54:34.485687 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:34.485694 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:34.485701 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:34.485708 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:34.485715 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:34.485722 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:34.485729 | orchestrator | 2026-03-17 00:54:34.485736 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-17 00:54:34.485743 | orchestrator | Tuesday 17 March 2026 
00:52:22 +0000 (0:00:03.251) 0:00:15.839 ********* 2026-03-17 00:54:34.485753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-17 00:54:34.485763 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-17 00:54:34.485770 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-17 00:54:34.485784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-17 00:54:34.485792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-17 00:54:34.485799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-17 00:54:34.485807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485817 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485836 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485844 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485850 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:34.485857 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485866 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485880 | 
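The "Configure OVN in OVSDB" items above set per-chassis `external_ids` on the local Open_vSwitch record (tunnel endpoint IP, geneve encapsulation, the southbound `ovn-remote` list, probe intervals). A sketch of the equivalent `ovs-vsctl` invocations, built from the values logged for testbed-node-0; it assumes the standard `ovs-vsctl set open_vswitch . external_ids:KEY=VALUE` form.

```python
# Sketch: compose ovs-vsctl commands matching the "Configure OVN in OVSDB"
# task above. Keys/values are taken from the logged items for testbed-node-0.

settings = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
}

def vsctl_commands(settings):
    # One `ovs-vsctl set` per external_ids key on the local Open_vSwitch row.
    return [
        f'ovs-vsctl set open_vswitch . external_ids:{key}="{value}"'
        for key, value in settings.items()
    ]

for cmd in vsctl_commands(settings):
    print(cmd)
```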
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485896 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:34.485911 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485920 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485927 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485935 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485952 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:34.485964 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.485971 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.485978 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.485986 | orchestrator | changed: [testbed-node-0] 
=> (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.485993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.486000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:34.486008 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486183 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486193 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:34.486213 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:34.486218 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:34.486228 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:34.486233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:54:34.486238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:54:34.486250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 
'state': 'present'}) 2026-03-17 00:54:34.486256 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-17 00:54:34.486268 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-17 00:54:34.486273 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-17 00:54:34.486278 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-17 00:54:34.486283 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-17 00:54:34.486288 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-17 00:54:34.486292 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:34.486297 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:34.486302 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:34.486307 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:54:34.486312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:54:34.486316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 
'state': 'present'}) 2026-03-17 00:54:34.486321 | orchestrator | 2026-03-17 00:54:34.486326 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486330 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:20.268) 0:00:36.107 ********* 2026-03-17 00:54:34.486335 | orchestrator | 2026-03-17 00:54:34.486339 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486344 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.077) 0:00:36.185 ********* 2026-03-17 00:54:34.486349 | orchestrator | 2026-03-17 00:54:34.486353 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486358 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.082) 0:00:36.268 ********* 2026-03-17 00:54:34.486363 | orchestrator | 2026-03-17 00:54:34.486367 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486372 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.089) 0:00:36.358 ********* 2026-03-17 00:54:34.486377 | orchestrator | 2026-03-17 00:54:34.486381 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486386 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.069) 0:00:36.427 ********* 2026-03-17 00:54:34.486391 | orchestrator | 2026-03-17 00:54:34.486395 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:34.486400 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.064) 0:00:36.491 ********* 2026-03-17 00:54:34.486404 | orchestrator | 2026-03-17 00:54:34.486409 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-17 00:54:34.486413 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 
(0:00:00.063) 0:00:36.555 ********* 2026-03-17 00:54:34.486418 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:54:34.486424 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:54:34.486429 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:54:34.486433 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486439 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486446 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486457 | orchestrator | 2026-03-17 00:54:34.486467 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-17 00:54:34.486481 | orchestrator | Tuesday 17 March 2026 00:52:45 +0000 (0:00:01.728) 0:00:38.283 ********* 2026-03-17 00:54:34.486488 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:34.486495 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:34.486502 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:34.486510 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:34.486517 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:34.486523 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:34.486530 | orchestrator | 2026-03-17 00:54:34.486537 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-17 00:54:34.486543 | orchestrator | 2026-03-17 00:54:34.486551 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:34.486558 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:26.016) 0:01:04.300 ********* 2026-03-17 00:54:34.486565 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:34.486573 | orchestrator | 2026-03-17 00:54:34.486586 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:34.486594 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.454) 
0:01:04.754 ********* 2026-03-17 00:54:34.486602 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:34.486609 | orchestrator | 2026-03-17 00:54:34.486623 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-17 00:54:34.486631 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:00.600) 0:01:05.354 ********* 2026-03-17 00:54:34.486639 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486646 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486655 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486660 | orchestrator | 2026-03-17 00:54:34.486664 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-17 00:54:34.486669 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.624) 0:01:05.979 ********* 2026-03-17 00:54:34.486674 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486678 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486683 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486687 | orchestrator | 2026-03-17 00:54:34.486691 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-17 00:54:34.486696 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.334) 0:01:06.314 ********* 2026-03-17 00:54:34.486700 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486705 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486709 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486714 | orchestrator | 2026-03-17 00:54:34.486718 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-17 00:54:34.486723 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.378) 0:01:06.693 ********* 2026-03-17 00:54:34.486727 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486731 | 
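The lookup tasks above ("Divide hosts by their OVN NB/SB volume availability") partition the DB hosts by whether a database volume already exists, which decides who bootstraps a new cluster versus joins an existing one. A sketch of that partitioning logic, assuming volume presence reduces to a boolean per host; hostnames mirror the log, the flags are assumed (this run is a fresh deployment, so no volumes exist yet).

```python
# Sketch of the host partitioning the ovn-db role performs: hosts with an
# existing DB volume would rejoin a cluster; hosts without one get bootstrapped.

def divide_hosts(volume_present):
    have_volume = sorted(h for h, p in volume_present.items() if p)
    need_bootstrap = sorted(h for h, p in volume_present.items() if not p)
    return have_volume, need_bootstrap

# Fresh deployment, as in this run: no OVN NB volumes exist yet.
have, boot = divide_hosts({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
print(have, boot)
```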
orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486736 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486741 | orchestrator | 2026-03-17 00:54:34.486745 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-17 00:54:34.486750 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:00.279) 0:01:06.973 ********* 2026-03-17 00:54:34.486754 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.486759 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.486763 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.486768 | orchestrator | 2026-03-17 00:54:34.486772 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-17 00:54:34.486777 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:00.339) 0:01:07.313 ********* 2026-03-17 00:54:34.486781 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486786 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.486790 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486795 | orchestrator | 2026-03-17 00:54:34.486805 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-17 00:54:34.486809 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:00.273) 0:01:07.586 ********* 2026-03-17 00:54:34.486813 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486818 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.486822 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486826 | orchestrator | 2026-03-17 00:54:34.486831 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-17 00:54:34.486836 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:00.240) 0:01:07.827 ********* 2026-03-17 00:54:34.486840 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486844 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:34.486849 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486853 | orchestrator | 2026-03-17 00:54:34.486858 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-17 00:54:34.486862 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:00.381) 0:01:08.209 ********* 2026-03-17 00:54:34.486867 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486871 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.486875 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486880 | orchestrator | 2026-03-17 00:54:34.486884 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-17 00:54:34.486888 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:00.274) 0:01:08.483 ********* 2026-03-17 00:54:34.486893 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486897 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.486901 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486906 | orchestrator | 2026-03-17 00:54:34.486934 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-17 00:54:34.486941 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:00.283) 0:01:08.767 ********* 2026-03-17 00:54:34.486945 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486950 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.486954 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486958 | orchestrator | 2026-03-17 00:54:34.486963 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-17 00:54:34.486967 | orchestrator | Tuesday 17 March 2026 00:53:16 +0000 (0:00:00.371) 0:01:09.138 ********* 2026-03-17 00:54:34.486972 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.486976 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:34.486980 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.486985 | orchestrator | 2026-03-17 00:54:34.486989 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-17 00:54:34.486994 | orchestrator | Tuesday 17 March 2026 00:53:16 +0000 (0:00:00.529) 0:01:09.667 ********* 2026-03-17 00:54:34.486998 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487003 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487007 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487011 | orchestrator | 2026-03-17 00:54:34.487016 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-17 00:54:34.487021 | orchestrator | Tuesday 17 March 2026 00:53:17 +0000 (0:00:00.290) 0:01:09.958 ********* 2026-03-17 00:54:34.487025 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487030 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487034 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487038 | orchestrator | 2026-03-17 00:54:34.487047 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-17 00:54:34.487051 | orchestrator | Tuesday 17 March 2026 00:53:17 +0000 (0:00:00.305) 0:01:10.264 ********* 2026-03-17 00:54:34.487055 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487060 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487064 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487068 | orchestrator | 2026-03-17 00:54:34.487090 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-17 00:54:34.487115 | orchestrator | Tuesday 17 March 2026 00:53:17 +0000 (0:00:00.278) 0:01:10.543 ********* 2026-03-17 00:54:34.487124 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487130 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:34.487137 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487144 | orchestrator | 2026-03-17 00:54:34.487151 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-17 00:54:34.487157 | orchestrator | Tuesday 17 March 2026 00:53:18 +0000 (0:00:00.493) 0:01:11.037 ********* 2026-03-17 00:54:34.487164 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487171 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487177 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487184 | orchestrator | 2026-03-17 00:54:34.487191 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:34.487199 | orchestrator | Tuesday 17 March 2026 00:53:18 +0000 (0:00:00.339) 0:01:11.377 ********* 2026-03-17 00:54:34.487205 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:34.487213 | orchestrator | 2026-03-17 00:54:34.487220 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-17 00:54:34.487228 | orchestrator | Tuesday 17 March 2026 00:53:19 +0000 (0:00:00.536) 0:01:11.914 ********* 2026-03-17 00:54:34.487235 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.487244 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.487248 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:34.487254 | orchestrator | 2026-03-17 00:54:34.487272 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-17 00:54:34.487288 | orchestrator | Tuesday 17 March 2026 00:53:19 +0000 (0:00:00.918) 0:01:12.832 ********* 2026-03-17 00:54:34.487296 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:34.487303 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:34.487310 | orchestrator | ok: [testbed-node-2] 
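After the bootstrap-args facts are set, the role's later "Check NB/SB cluster status" tasks (skipped here, since this is a new cluster) inspect the Raft state, typically via `ovn-appctl ... cluster/status`. A sketch of picking the `Role:` field out of such output; `SAMPLE` is an abbreviated, assumed example of that output, not captured from this run.

```python
# Sketch: extract the Raft role from `ovn-appctl -t /var/run/ovn/ovnnb_db.ctl
# cluster/status OVN_Northbound` output. SAMPLE is illustrative only.

SAMPLE = """\
1a2b
Name: OVN_Northbound
Cluster ID: ab12 (...)
Server ID: 1a2b (...)
Address: tcp:192.168.16.10:6643
Status: cluster member
Role: leader
"""

def raft_role(status_text):
    # The status output is line-oriented; the role appears as "Role: <role>".
    for line in status_text.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return None

print(raft_role(SAMPLE))
```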
2026-03-17 00:54:34.487318 | orchestrator | 2026-03-17 00:54:34.487326 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-17 00:54:34.487334 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:00.512) 0:01:13.345 ********* 2026-03-17 00:54:34.487342 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487349 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487356 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487364 | orchestrator | 2026-03-17 00:54:34.487370 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-17 00:54:34.487375 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:00.326) 0:01:13.671 ********* 2026-03-17 00:54:34.487380 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487384 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487388 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487392 | orchestrator | 2026-03-17 00:54:34.487397 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-17 00:54:34.487401 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:00.320) 0:01:13.992 ********* 2026-03-17 00:54:34.487406 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487410 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487414 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487419 | orchestrator | 2026-03-17 00:54:34.487423 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-17 00:54:34.487428 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:00.680) 0:01:14.672 ********* 2026-03-17 00:54:34.487432 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487437 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487441 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:54:34.487445 | orchestrator | 2026-03-17 00:54:34.487450 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-17 00:54:34.487454 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:00.386) 0:01:15.059 ********* 2026-03-17 00:54:34.487464 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487468 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487473 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487477 | orchestrator | 2026-03-17 00:54:34.487482 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-17 00:54:34.487486 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:00.365) 0:01:15.424 ********* 2026-03-17 00:54:34.487490 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:34.487494 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:34.487499 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:34.487503 | orchestrator | 2026-03-17 00:54:34.487507 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-17 00:54:34.487512 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:00.349) 0:01:15.774 ********* 2026-03-17 00:54:34.487518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487588 | orchestrator | 2026-03-17 00:54:34.487598 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:54:34.487606 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:01.676) 0:01:17.450 ********* 2026-03-17 00:54:34.487614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-17 00:54:34.487670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487697 | orchestrator | 2026-03-17 00:54:34.487705 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-17 00:54:34.487711 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:04.664) 0:01:22.115 ********* 2026-03-17 00:54:34.487716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-17 00:54:34.487721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.487768 | orchestrator | 2026-03-17 00:54:34.487773 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:54:34.487777 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:02.167) 0:01:24.283 ********* 2026-03-17 00:54:34.487782 | orchestrator | 2026-03-17 00:54:34.487786 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:54:34.487791 | 
orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:00.064) 0:01:24.348 *********
2026-03-17 00:54:34.487796 | orchestrator |
2026-03-17 00:54:34.487800 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:34.487805 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:00.062) 0:01:24.411 *********
2026-03-17 00:54:34.487810 | orchestrator |
2026-03-17 00:54:34.487816 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-17 00:54:34.487823 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:00.077) 0:01:24.488 *********
2026-03-17 00:54:34.487831 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.487838 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.487845 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.487853 | orchestrator |
2026-03-17 00:54:34.487860 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-17 00:54:34.487868 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:05.046) 0:01:29.535 *********
2026-03-17 00:54:34.487875 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.487883 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.487890 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.487897 | orchestrator |
2026-03-17 00:54:34.487904 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-17 00:54:34.487909 | orchestrator | Tuesday 17 March 2026 00:53:45 +0000 (0:00:08.436) 0:01:37.972 *********
2026-03-17 00:54:34.487913 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.487917 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.487922 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.487926 | orchestrator |
2026-03-17 00:54:34.487931 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-17 00:54:34.487935 | orchestrator | Tuesday 17 March 2026 00:53:52 +0000 (0:00:06.986) 0:01:44.958 *********
2026-03-17 00:54:34.487939 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:34.487943 | orchestrator |
2026-03-17 00:54:34.487948 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-17 00:54:34.487952 | orchestrator | Tuesday 17 March 2026 00:53:52 +0000 (0:00:00.116) 0:01:45.075 *********
2026-03-17 00:54:34.487956 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.487961 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.487965 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.487970 | orchestrator |
2026-03-17 00:54:34.487974 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-17 00:54:34.487978 | orchestrator | Tuesday 17 March 2026 00:53:52 +0000 (0:00:00.687) 0:01:45.763 *********
2026-03-17 00:54:34.487983 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:34.487987 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:34.487995 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.487999 | orchestrator |
2026-03-17 00:54:34.488004 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-17 00:54:34.488008 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:00.564) 0:01:46.327 *********
2026-03-17 00:54:34.488016 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488021 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488025 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488029 | orchestrator |
2026-03-17 00:54:34.488037 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-17 00:54:34.488042 | orchestrator | Tuesday 17 March 2026 00:53:54 +0000 (0:00:01.169) 0:01:47.497 *********
2026-03-17 00:54:34.488046 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:34.488050 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:34.488055 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.488059 | orchestrator |
2026-03-17 00:54:34.488063 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-17 00:54:34.488067 | orchestrator | Tuesday 17 March 2026 00:53:55 +0000 (0:00:00.737) 0:01:48.234 *********
2026-03-17 00:54:34.488091 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488097 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488101 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488106 | orchestrator |
2026-03-17 00:54:34.488110 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-17 00:54:34.488115 | orchestrator | Tuesday 17 March 2026 00:53:56 +0000 (0:00:01.199) 0:01:49.434 *********
2026-03-17 00:54:34.488119 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488123 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488128 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488132 | orchestrator |
2026-03-17 00:54:34.488136 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-17 00:54:34.488141 | orchestrator | Tuesday 17 March 2026 00:53:57 +0000 (0:00:01.306) 0:01:50.740 *********
2026-03-17 00:54:34.488145 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488149 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488154 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488158 | orchestrator |
2026-03-17 00:54:34.488162 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-17 00:54:34.488167 | orchestrator | Tuesday 17 March 2026 00:53:59 +0000 (0:00:01.175) 0:01:51.916 *********
2026-03-17 00:54:34.488172 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488183 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488192 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 
00:54:34.488201 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488208 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488223 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488227 | orchestrator | 2026-03-17 00:54:34.488232 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:54:34.488237 | orchestrator | Tuesday 17 March 2026 00:54:00 +0000 (0:00:01.834) 
0:01:53.751 ********* 2026-03-17 00:54:34.488241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488246 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488255 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488293 | orchestrator | 2026-03-17 00:54:34.488298 | orchestrator | TASK [ovn-db : Check ovn containers] 
******************************************* 2026-03-17 00:54:34.488302 | orchestrator | Tuesday 17 March 2026 00:54:05 +0000 (0:00:05.087) 0:01:58.839 ********* 2026-03-17 00:54:34.488307 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488312 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488327 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:34.488362 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-03-17 00:54:34.488369 | orchestrator |
2026-03-17 00:54:34.488376 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:34.488382 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:03.324) 0:02:02.163 *********
2026-03-17 00:54:34.488388 | orchestrator |
2026-03-17 00:54:34.488395 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:34.488406 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:00.080) 0:02:02.244 *********
2026-03-17 00:54:34.488414 | orchestrator |
2026-03-17 00:54:34.488422 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:34.488429 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:00.070) 0:02:02.314 *********
2026-03-17 00:54:34.488436 | orchestrator |
2026-03-17 00:54:34.488442 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-17 00:54:34.488450 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:00.161) 0:02:02.476 *********
2026-03-17 00:54:34.488456 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.488461 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.488465 | orchestrator |
2026-03-17 00:54:34.488470 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-17 00:54:34.488474 | orchestrator | Tuesday 17 March 2026 00:54:16 +0000 (0:00:06.593) 0:02:09.069 *********
2026-03-17 00:54:34.488478 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.488482 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.488487 | orchestrator |
2026-03-17 00:54:34.488491 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-17 00:54:34.488495 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:06.021) 0:02:15.091 *********
2026-03-17 00:54:34.488500 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:34.488504 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:34.488508 | orchestrator |
2026-03-17 00:54:34.488513 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-17 00:54:34.488517 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:06.053) 0:02:21.145 *********
2026-03-17 00:54:34.488521 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:34.488526 | orchestrator |
2026-03-17 00:54:34.488530 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-17 00:54:34.488534 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.123) 0:02:21.268 *********
2026-03-17 00:54:34.488544 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488548 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488553 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488557 | orchestrator |
2026-03-17 00:54:34.488562 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-17 00:54:34.488566 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.781) 0:02:22.050 *********
2026-03-17 00:54:34.488570 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:34.488574 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:34.488579 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.488583 | orchestrator |
2026-03-17 00:54:34.488587 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-17 00:54:34.488592 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.756) 0:02:22.806 *********
2026-03-17 00:54:34.488596 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488600 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488605 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488609 | orchestrator |
2026-03-17 00:54:34.488613 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-17 00:54:34.488618 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:00.749) 0:02:23.556 *********
2026-03-17 00:54:34.488622 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:34.488626 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:34.488631 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:34.488635 | orchestrator |
2026-03-17 00:54:34.488639 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-17 00:54:34.488643 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.636) 0:02:24.192 *********
2026-03-17 00:54:34.488648 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488652 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488656 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488661 | orchestrator |
2026-03-17 00:54:34.488665 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-17 00:54:34.488669 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.736) 0:02:24.928 *********
2026-03-17 00:54:34.488674 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:34.488678 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:34.488682 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:34.488687 | orchestrator |
2026-03-17 00:54:34.488691 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:54:34.488699 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-17 00:54:34.488707 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-17 00:54:34.488715 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-17 00:54:34.488721 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:54:34.488730 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:54:34.488739 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:54:34.488747 | orchestrator |
2026-03-17 00:54:34.488753 | orchestrator |
2026-03-17 00:54:34.488768 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:54:34.488775 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:01.151) 0:02:26.080 *********
2026-03-17 00:54:34.488782 | orchestrator | ===============================================================================
2026-03-17 00:54:34.488789 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.02s
2026-03-17 00:54:34.488807 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.27s
2026-03-17 00:54:34.488814 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.46s
2026-03-17 00:54:34.488821 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.04s
2026-03-17 00:54:34.488828 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 11.64s
2026-03-17 00:54:34.488835 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.09s
2026-03-17 00:54:34.488842 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.66s
2026-03-17 00:54:34.488848 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.32s
2026-03-17 00:54:34.488855 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.25s
2026-03-17 00:54:34.488862 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.17s
2026-03-17 00:54:34.488867 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.92s
2026-03-17 00:54:34.488871 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.91s
2026-03-17 00:54:34.488876 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.83s
2026-03-17 00:54:34.488880 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.75s
2026-03-17 00:54:34.488884 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.73s
2026-03-17 00:54:34.488889 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.73s
2026-03-17 00:54:34.488894 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s
2026-03-17 00:54:34.488898 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.63s
2026-03-17 00:54:34.488903 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.39s
2026-03-17 00:54:34.488907 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.31s
2026-03-17 00:54:34.488912 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:54:34.488916 | orchestrator | 2026-03-17 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:37.520738 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state STARTED
2026-03-17 00:54:37.520852 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:54:37.520876 | orchestrator | 2026-03-17 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 
00:54:40.556565 | orchestrator | [repeated polling output trimmed: Task a3064a57-0855-48c2-918a-286fdffb6cdd and Task 22eef708-a8e4-4e89-abd9-ab92803db6aa remained in state STARTED, with "Wait 1 second(s) until the next check" logged every ~3 seconds from 00:54:40 through 00:57:00] 2026-03-17
00:57:00.508366 | orchestrator | 2026-03-17 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:03.553424 | orchestrator | 2026-03-17 00:57:03.553505 | orchestrator | 2026-03-17 00:57:03.553516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:57:03.553525 | orchestrator | 2026-03-17 00:57:03.553533 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:57:03.553541 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.288) 0:00:00.288 ********* 2026-03-17 00:57:03.553549 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.553558 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.553566 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.553574 | orchestrator | 2026-03-17 00:57:03.553582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:57:03.553590 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.274) 0:00:00.563 ********* 2026-03-17 00:57:03.553598 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-17 00:57:03.553606 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-17 00:57:03.553614 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-17 00:57:03.553622 | orchestrator | 2026-03-17 00:57:03.553629 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-17 00:57:03.553637 | orchestrator | 2026-03-17 00:57:03.553644 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-17 00:57:03.553652 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.304) 0:00:00.867 ********* 2026-03-17 00:57:03.553660 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 
00:57:03.553668 | orchestrator | 2026-03-17 00:57:03.553676 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-17 00:57:03.553683 | orchestrator | Tuesday 17 March 2026 00:51:04 +0000 (0:00:00.598) 0:00:01.466 ********* 2026-03-17 00:57:03.553713 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.553721 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.553728 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.553736 | orchestrator | 2026-03-17 00:57:03.553744 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-17 00:57:03.553751 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:01.349) 0:00:02.816 ********* 2026-03-17 00:57:03.553759 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.553767 | orchestrator | 2026-03-17 00:57:03.553774 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-17 00:57:03.553782 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:00.645) 0:00:03.461 ********* 2026-03-17 00:57:03.553789 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.553797 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.553805 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.553812 | orchestrator | 2026-03-17 00:57:03.553820 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-17 00:57:03.553828 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:00.828) 0:00:04.290 ********* 2026-03-17 00:57:03.553835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553851 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:57:03.553896 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:57:03.553908 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:57:03.553920 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:57:03.553932 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:57:03.553944 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:57:03.553956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:57:03.553968 | orchestrator | 2026-03-17 00:57:03.553979 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 00:57:03.553992 | orchestrator | Tuesday 17 March 2026 00:51:09 +0000 (0:00:02.887) 0:00:07.178 ********* 2026-03-17 00:57:03.554079 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 00:57:03.554095 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 00:57:03.554109 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 00:57:03.554123 | orchestrator | 2026-03-17 00:57:03.554135 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 00:57:03.554149 | orchestrator | Tuesday 17 March 2026 
00:51:10 +0000 (0:00:00.781) 0:00:07.959 ********* 2026-03-17 00:57:03.554340 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 00:57:03.554356 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 00:57:03.554369 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 00:57:03.554382 | orchestrator | 2026-03-17 00:57:03.554395 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 00:57:03.554408 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:01.609) 0:00:09.569 ********* 2026-03-17 00:57:03.554422 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-17 00:57:03.554445 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.554467 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-17 00:57:03.554474 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.554482 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-17 00:57:03.554489 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.554496 | orchestrator | 2026-03-17 00:57:03.554503 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-17 00:57:03.554511 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:01.069) 0:00:10.638 ********* 2026-03-17 00:57:03.554520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.554625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.554634 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.554641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.554649 | orchestrator | 2026-03-17 00:57:03.554657 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-17 00:57:03.554664 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:02.562) 0:00:13.201 ********* 2026-03-17 00:57:03.554672 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.554679 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.554686 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.554693 | orchestrator | 2026-03-17 00:57:03.554702 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-17 00:57:03.554714 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:01.488) 0:00:14.689 ********* 2026-03-17 00:57:03.554726 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-17 00:57:03.554739 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-17 
00:57:03.554750 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-17 00:57:03.554763 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-17 00:57:03.554775 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-17 00:57:03.554787 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-17 00:57:03.554799 | orchestrator | 2026-03-17 00:57:03.554817 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-17 00:57:03.554830 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:02.364) 0:00:17.054 ********* 2026-03-17 00:57:03.554841 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.554854 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.554866 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.554878 | orchestrator | 2026-03-17 00:57:03.554890 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-17 00:57:03.554901 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:01.128) 0:00:18.183 ********* 2026-03-17 00:57:03.554913 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.554929 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.554936 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.554944 | orchestrator | 2026-03-17 00:57:03.554951 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-17 00:57:03.554958 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:02.800) 0:00:20.983 ********* 2026-03-17 00:57:03.554966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.555071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.555084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', 
'__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555102 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.555110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.555124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.555139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555154 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.555170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.555178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.555187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555211 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.555218 | orchestrator | 2026-03-17 00:57:03.555236 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-17 00:57:03.555293 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.855) 0:00:21.839 ********* 2026-03-17 00:57:03.555302 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555334 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555385 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.555446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a', '__omit_place_holder__e8804baeb6a8ef534ecfda42b605b9f2c371c74a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 00:57:03.555457 | orchestrator | 2026-03-17 00:57:03.555476 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-17 00:57:03.555493 | orchestrator | 
Tuesday 17 March 2026 00:51:27 +0000 (0:00:03.320) 0:00:25.159 ********* 2026-03-17 00:57:03.555547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:57:03.555687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.555711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.555725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:57:03.555737 | orchestrator | 2026-03-17 00:57:03.555750 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-17 00:57:03.555764 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:03.173) 0:00:28.333 ********* 2026-03-17 
00:57:03.555774 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 00:57:03.555783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 00:57:03.555790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 00:57:03.555797 | orchestrator | 2026-03-17 00:57:03.555805 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-17 00:57:03.555812 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:02.952) 0:00:31.285 ********* 2026-03-17 00:57:03.555819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 00:57:03.555827 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 00:57:03.555834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 00:57:03.555841 | orchestrator | 2026-03-17 00:57:03.555856 | orchestrator | 2026-03-17 00:57:03 | INFO  | Task a3064a57-0855-48c2-918a-286fdffb6cdd is in state SUCCESS 2026-03-17 00:57:03.555864 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-17 00:57:03.555871 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:03.970) 0:00:35.256 ********* 2026-03-17 00:57:03.555878 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.555886 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.555893 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.555900 | orchestrator | 2026-03-17 00:57:03.555907 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-17 00:57:03.555915 | orchestrator | 
Tuesday 17 March 2026 00:51:38 +0000 (0:00:00.987) 0:00:36.244 ********* 2026-03-17 00:57:03.555922 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 00:57:03.555930 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 00:57:03.555944 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 00:57:03.555952 | orchestrator | 2026-03-17 00:57:03.555959 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-17 00:57:03.555966 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:02.483) 0:00:38.727 ********* 2026-03-17 00:57:03.555973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 00:57:03.556273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 00:57:03.556289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 00:57:03.556297 | orchestrator | 2026-03-17 00:57:03.556305 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-17 00:57:03.556312 | orchestrator | Tuesday 17 March 2026 00:51:43 +0000 (0:00:01.972) 0:00:40.700 ********* 2026-03-17 00:57:03.556320 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-17 00:57:03.556328 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-17 00:57:03.556335 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-17 00:57:03.556343 | orchestrator | 2026-03-17 00:57:03.556350 | orchestrator | TASK [loadbalancer : Copying over 
haproxy-internal.pem] ************************ 2026-03-17 00:57:03.556357 | orchestrator | Tuesday 17 March 2026 00:51:44 +0000 (0:00:01.553) 0:00:42.253 ********* 2026-03-17 00:57:03.556365 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-17 00:57:03.556372 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-17 00:57:03.556380 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-17 00:57:03.556387 | orchestrator | 2026-03-17 00:57:03.556394 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-17 00:57:03.556401 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:01.740) 0:00:43.994 ********* 2026-03-17 00:57:03.556415 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.556423 | orchestrator | 2026-03-17 00:57:03.556430 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-17 00:57:03.556437 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.638) 0:00:44.632 ********* 2026-03-17 00:57:03.556446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:57:03.556455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556584 | orchestrator |
2026-03-17 00:57:03.556595 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-17 00:57:03.556605 | orchestrator | Tuesday 17 March 2026 00:51:50 +0000 (0:00:03.590) 0:00:48.223 *********
2026-03-17 00:57:03.556623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556653 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.556701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556777 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.556789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.556812 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.556823 | orchestrator |
2026-03-17 00:57:03.556834 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-17 00:57:03.556907 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:00.679) 0:00:48.902 *********
2026-03-17 00:57:03.556926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.556945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.556957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557134 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.557144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557175 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.557182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557208 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.557215 | orchestrator |
2026-03-17 00:57:03.557222 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-17 00:57:03.557235 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:01.623) 0:00:50.525 *********
2026-03-17 00:57:03.557242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557268 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.557275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557299 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.557306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557337 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.557344 | orchestrator |
2026-03-17 00:57:03.557351 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-17 00:57:03.557357 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.692) 0:00:51.218 *********
2026-03-17 00:57:03.557364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557385 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.557395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557421 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.557433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557454 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.557461 | orchestrator |
2026-03-17 00:57:03.557467 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-17 00:57:03.557474 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.907) 0:00:52.125 *********
2026-03-17 00:57:03.557481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.557492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.557499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.557506 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.558547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.558604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.558613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.558621 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.558629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.558648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.558656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.558662 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.558669 | orchestrator |
2026-03-17 00:57:03.558677 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-17 00:57:03.558684 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:01.648) 0:00:53.773 *********
2026-03-17 00:57:03.558691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.558707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.558714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.558721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.558736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.558744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.558751 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.558757 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.558764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value':
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.558776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.558783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.558790 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.558797 | orchestrator | 2026-03-17 00:57:03.558803 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2026-03-17 00:57:03.558810 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:01.658) 0:00:55.432 ********* 2026-03-17 00:57:03.558817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.558828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.558838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-17 00:57:03.558845 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.558921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.558930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.558944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.558952 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.558959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:57:03.558971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:57:03.558978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:57:03.558985 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 00:57:03.558992 | orchestrator |
2026-03-17 00:57:03.559019 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-17 00:57:03.559031 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:00.899) 0:00:56.332 *********
2026-03-17 00:57:03.559038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559060 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.559072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559100 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.559110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559131 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.559138 | orchestrator |
2026-03-17 00:57:03.559146 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-17 00:57:03.559154 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:01.025) 0:00:57.357 *********
2026-03-17 00:57:03.559162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:57:03.559170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:57:03.559181 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:57:03.559194 | orchestrator |
2026-03-17 00:57:03.559203 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-17 00:57:03.559211 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:01.324) 0:00:58.682 *********
2026-03-17 00:57:03.559219 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:57:03.559227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:57:03.559234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:57:03.559242 | orchestrator |
2026-03-17 00:57:03.559250 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-17 00:57:03.559258 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:01.337) 0:01:00.019 *********
2026-03-17 00:57:03.559266 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:57:03.559274 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:57:03.559282 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:57:03.559289 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:57:03.559297 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.559305 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:57:03.559313 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.559320 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:57:03.559328 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.559336 | orchestrator |
2026-03-17 00:57:03.559343 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-17 00:57:03.559351 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:01.290) 0:01:01.309 *********
2026-03-17 00:57:03.559363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:57:03.559396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:57:03.559451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:57:03.559492 | orchestrator |
2026-03-17 00:57:03.559500 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-17 00:57:03.559508 | orchestrator | Tuesday 17 March 2026 00:52:06 +0000 (0:00:02.693) 0:01:04.003 *********
2026-03-17 00:57:03.559516 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.559586 | orchestrator |
2026-03-17 00:57:03.559599 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-17 00:57:03.559606 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.577) 0:01:04.580 *********
2026-03-17 00:57:03.559614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-17 00:57:03.559627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-17 00:57:03.559634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-17 00:57:03.559660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-17 00:57:03.559671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-17 00:57:03.559724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-17 00:57:03.559731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.559766 | orchestrator |
2026-03-17 00:57:03.559777 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-17 00:57:03.559787 | orchestrator | Tuesday 17 March 2026 00:52:12 +0000 (0:00:05.482) 0:01:10.063 *********
2026-03-17 00:57:03.559799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-17 00:57:03.559818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-17 00:57:03.559829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559850 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.559866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-17 00:57:03.559878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.559896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559917 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.559935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-17 00:57:03.559946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.559957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.559989 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.560035 | orchestrator | 2026-03-17 00:57:03.560045 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-17 00:57:03.560101 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:00.956) 0:01:11.019 ********* 2026-03-17 00:57:03.560110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560126 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.560133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:57:03.560164 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.560171 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.560178 | orchestrator | 2026-03-17 00:57:03.560190 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-17 00:57:03.560197 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:00.888) 0:01:11.908 ********* 2026-03-17 00:57:03.560203 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.560210 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.560217 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.560223 | orchestrator | 2026-03-17 00:57:03.560230 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-17 00:57:03.560272 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:01.857) 0:01:13.765 ********* 2026-03-17 00:57:03.560280 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.560287 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.560294 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.560301 | orchestrator | 2026-03-17 00:57:03.560309 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-17 00:57:03.560316 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:02.464) 0:01:16.230 ********* 2026-03-17 00:57:03.560323 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.560330 | orchestrator | 2026-03-17 00:57:03.560338 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-17 00:57:03.560365 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:00.544) 0:01:16.774 ********* 2026-03-17 
00:57:03.560374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.560392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.560421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560429 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.560443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560470 | orchestrator | 2026-03-17 00:57:03.560477 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-17 00:57:03.560485 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:03.608) 0:01:20.382 ********* 2026-03-17 00:57:03.560498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.560506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560545 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.560560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.560568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560583 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.560595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.560608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.560624 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.560631 | orchestrator | 2026-03-17 00:57:03.560639 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-17 00:57:03.560646 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:02.227) 0:01:22.609 ********* 2026-03-17 00:57:03.560657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560674 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.560681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560703 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.560711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:57:03.560718 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.560725 | orchestrator | 2026-03-17 00:57:03.560733 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-17 00:57:03.560740 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:00.886) 0:01:23.495 ********* 2026-03-17 00:57:03.560747 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.560754 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.560761 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.560769 | orchestrator | 2026-03-17 00:57:03.560776 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-17 00:57:03.560783 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:01.252) 0:01:24.748 ********* 2026-03-17 00:57:03.560790 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.560802 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.560809 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.560816 | orchestrator | 2026-03-17 00:57:03.560827 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-17 00:57:03.560835 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:01.851) 0:01:26.599 ********* 2026-03-17 00:57:03.560842 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.560849 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.560857 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.560938 | orchestrator | 2026-03-17 00:57:03.560947 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-17 00:57:03.560954 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.275) 0:01:26.875 ********* 2026-03-17 00:57:03.560961 | 
orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.560969 | orchestrator | 2026-03-17 00:57:03.560995 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-17 00:57:03.561024 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:00.854) 0:01:27.730 ********* 2026-03-17 00:57:03.561037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 00:57:03.561057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 00:57:03.561095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 00:57:03.561104 | orchestrator | 2026-03-17 00:57:03.561111 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-17 00:57:03.561119 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:02.553) 0:01:30.283 ********* 2026-03-17 00:57:03.562273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-17 00:57:03.562380 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.562397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-17 00:57:03.562408 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.562418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}})  2026-03-17 00:57:03.562428 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.562446 | orchestrator | 2026-03-17 00:57:03.562465 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-17 00:57:03.562482 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:01.385) 0:01:31.668 ********* 2026-03-17 00:57:03.562515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562557 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.562575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562610 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.562635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 00:57:03.562655 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.562665 | orchestrator | 2026-03-17 00:57:03.562675 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-17 00:57:03.562685 | orchestrator | Tuesday 17 March 2026 00:52:36 +0000 (0:00:01.686) 0:01:33.355 ********* 2026-03-17 00:57:03.562694 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.562703 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.562713 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.562722 | orchestrator | 2026-03-17 00:57:03.562732 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] 
*********** 2026-03-17 00:57:03.562741 | orchestrator | Tuesday 17 March 2026 00:52:36 +0000 (0:00:00.383) 0:01:33.738 ********* 2026-03-17 00:57:03.562751 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.562760 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.562770 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.562779 | orchestrator | 2026-03-17 00:57:03.562789 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-17 00:57:03.562798 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:01.072) 0:01:34.811 ********* 2026-03-17 00:57:03.562808 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.562817 | orchestrator | 2026-03-17 00:57:03.562827 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-17 00:57:03.562836 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.803) 0:01:35.615 ********* 2026-03-17 00:57:03.562852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.562871 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.562923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.562938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562981 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.562992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563049 | orchestrator | 2026-03-17 00:57:03.563059 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-17 00:57:03.563069 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:02.936) 0:01:38.551 ********* 2026-03-17 00:57:03.563080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.563114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563132 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.563153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563169 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.563184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 
00:57:03.563209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563239 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.563250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563280 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.563290 | orchestrator | 2026-03-17 00:57:03.563300 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-17 00:57:03.563310 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:00.644) 0:01:39.195 
********* 2026-03-17 00:57:03.563320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563342 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.563352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563372 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.563382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-17 00:57:03.563407 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.563417 | orchestrator | 2026-03-17 00:57:03.563427 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-17 00:57:03.563443 | orchestrator | Tuesday 17 March 2026 00:52:42 +0000 
(0:00:00.923) 0:01:40.119 ********* 2026-03-17 00:57:03.563461 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.563478 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.563494 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.563512 | orchestrator | 2026-03-17 00:57:03.563530 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-17 00:57:03.563548 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:01.267) 0:01:41.387 ********* 2026-03-17 00:57:03.563566 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.563583 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.563602 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.563617 | orchestrator | 2026-03-17 00:57:03.563634 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-17 00:57:03.563650 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:02.097) 0:01:43.485 ********* 2026-03-17 00:57:03.563665 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.563674 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.563684 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.563693 | orchestrator | 2026-03-17 00:57:03.563703 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-17 00:57:03.563712 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:00.340) 0:01:43.825 ********* 2026-03-17 00:57:03.563722 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.563731 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.563740 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.563750 | orchestrator | 2026-03-17 00:57:03.563759 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-17 00:57:03.563769 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 
(0:00:00.374) 0:01:44.199 ********* 2026-03-17 00:57:03.563778 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.563788 | orchestrator | 2026-03-17 00:57:03.563797 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-17 00:57:03.563807 | orchestrator | Tuesday 17 March 2026 00:52:48 +0000 (0:00:01.499) 0:01:45.698 ********* 2026-03-17 00:57:03.563828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 00:57:03.563839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-03-17 00:57:03.563850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 00:57:03.563932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:57:03.563947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-03-17 00:57:03.563973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.563997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 00:57:03.564055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:57:03.564087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 
00:57:03.564218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564229 | orchestrator | 2026-03-17 00:57:03.564240 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-17 00:57:03.564251 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:04.218) 0:01:49.917 ********* 2026-03-17 00:57:03.564263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 00:57:03.564292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:57:03.564304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}})  2026-03-17 00:57:03.564332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:57:03.564344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564380 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564492 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.564513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 
'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564534 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.564555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 00:57:03.564567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:57:03.564579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564627 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.564657 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.564668 | orchestrator | 2026-03-17 00:57:03.564679 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-17 00:57:03.564690 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:01.213) 0:01:51.130 ********* 2026-03-17 00:57:03.564702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:57:03.564726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}})  2026-03-17 00:57:03.564752 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.564770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:57:03.564789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:57:03.564806 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.564825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:57:03.564844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:57:03.564863 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.564881 | orchestrator | 2026-03-17 00:57:03.564900 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-17 00:57:03.564919 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:01.367) 0:01:52.498 ********* 2026-03-17 00:57:03.564937 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.564957 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.564977 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.564995 | orchestrator | 2026-03-17 00:57:03.565122 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-17 00:57:03.565143 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:01.133) 0:01:53.632 ********* 2026-03-17 00:57:03.565161 | 
orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.565180 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.565209 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.565252 | orchestrator |
2026-03-17 00:57:03.565273 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-17 00:57:03.565291 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:01.766) 0:01:55.398 *********
2026-03-17 00:57:03.565309 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.565327 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.565346 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.565365 | orchestrator |
2026-03-17 00:57:03.565382 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-17 00:57:03.565402 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:00.295) 0:01:55.694 *********
2026-03-17 00:57:03.565414 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.565425 | orchestrator |
2026-03-17 00:57:03.565440 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-17 00:57:03.565458 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.831) 0:01:56.525 *********
2026-03-17 00:57:03.565506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.565544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.565581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.565618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.565656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.568156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.568211 | orchestrator |
2026-03-17 00:57:03.568222 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-03-17 00:57:03.568233 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:03.688) 0:02:00.214 *********
2026-03-17 00:57:03.568249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.568283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.568296 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.568311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.568335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.568346 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.568357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 00:57:03.568383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-17 00:57:03.568395 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.568405 | orchestrator |
2026-03-17 00:57:03.568414 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-03-17 00:57:03.568424 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:02.632) 0:02:02.846 *********
2026-03-17 00:57:03.568434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568461 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.568471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568495 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.568505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-17 00:57:03.568525 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.568535 | orchestrator |
2026-03-17 00:57:03.568544 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-17 00:57:03.568554 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:03.088) 0:02:05.935 *********
2026-03-17 00:57:03.568564 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.568573 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.568583 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.568592 | orchestrator |
2026-03-17 00:57:03.568602 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-17 00:57:03.568612 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:01.179) 0:02:07.114 *********
2026-03-17 00:57:03.568621 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.568631 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.568640 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.568650 | orchestrator |
2026-03-17 00:57:03.568660 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-17 00:57:03.568674 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:01.739) 0:02:08.854 *********
2026-03-17 00:57:03.568684 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.568694 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.568703 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.568713 | orchestrator |
2026-03-17 00:57:03.568722 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-17 00:57:03.568732 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.275) 0:02:09.129 *********
2026-03-17 00:57:03.568742 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.568761 | orchestrator |
2026-03-17 00:57:03.568773 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-17 00:57:03.568786 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:00.891) 0:02:10.021 *********
2026-03-17 00:57:03.568804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.568824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.568848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.568867 | orchestrator |
2026-03-17 00:57:03.568884 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-03-17 00:57:03.568902 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:02.683) 0:02:12.704 *********
2026-03-17 00:57:03.568919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.568938 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.568957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.568976 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.568987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 00:57:03.569022 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.569034 | orchestrator |
2026-03-17 00:57:03.569045 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-03-17 00:57:03.569057 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:00.381) 0:02:13.086 *********
2026-03-17 00:57:03.569067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569092 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.569103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569126 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.569135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-17 00:57:03.569159 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.569169 | orchestrator |
2026-03-17 00:57:03.569178 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-17 00:57:03.569188 | orchestrator | Tuesday 17 March 2026 00:53:16 +0000 (0:00:00.971) 0:02:14.057 *********
2026-03-17 00:57:03.569198 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.569207 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.569217 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.569226 | orchestrator |
2026-03-17 00:57:03.569236 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-17 00:57:03.569245 | orchestrator | Tuesday 17 March 2026 00:53:17 +0000 (0:00:01.235) 0:02:15.293 *********
2026-03-17 00:57:03.569255 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.569264 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.569274 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.569283 | orchestrator |
2026-03-17 00:57:03.569293 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-17 00:57:03.569302 | orchestrator | Tuesday 17 March 2026 00:53:19 +0000 (0:00:02.035) 0:02:17.329 *********
2026-03-17 00:57:03.569312 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.569321 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.569331 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.569340 | orchestrator |
2026-03-17 00:57:03.569355 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-17 00:57:03.569366 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:00.306) 0:02:17.635 *********
2026-03-17 00:57:03.569377 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.569388 | orchestrator |
2026-03-17 00:57:03.569398 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-17 00:57:03.569409 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:01.116) 0:02:18.752 *********
2026-03-17 00:57:03.569429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-17 00:57:03.569449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443',
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 00:57:03.569483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 00:57:03.569495 | orchestrator | 2026-03-17 00:57:03.569506 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-17 00:57:03.569517 | orchestrator | Tuesday 17 March 2026 00:53:25 +0000 (0:00:04.206) 0:02:22.958 ********* 2026-03-17 00:57:03.569540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 00:57:03.569559 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.569571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 00:57:03.569587 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.569607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 00:57:03.569626 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.569637 | orchestrator | 2026-03-17 00:57:03.569648 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-17 00:57:03.569658 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:00.868) 0:02:23.827 ********* 2026-03-17 00:57:03.569671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:57:03.569731 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.569742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:57:03.569823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569834 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.569852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:57:03.569875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:57:03.569885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:57:03.569896 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.569907 | orchestrator | 2026-03-17 00:57:03.569918 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-17 00:57:03.569929 | orchestrator | Tuesday 17 March 2026 00:53:27 +0000 (0:00:01.153) 0:02:24.980 ********* 2026-03-17 00:57:03.569940 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.569950 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.569961 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.569972 | orchestrator | 2026-03-17 00:57:03.569982 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-17 00:57:03.569993 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:01.947) 0:02:26.927 ********* 2026-03-17 00:57:03.570056 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.570070 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.570081 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.570092 | orchestrator | 2026-03-17 00:57:03.570110 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-17 00:57:03.570120 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:02.013) 0:02:28.940 ********* 2026-03-17 00:57:03.570131 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.570142 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.570153 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.570163 | orchestrator | 2026-03-17 00:57:03.570174 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-17 00:57:03.570185 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:00.400) 0:02:29.341 ********* 2026-03-17 00:57:03.570196 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.570206 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.570217 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.570228 | orchestrator | 2026-03-17 00:57:03.570238 | orchestrator | TASK 
[include_role : keystone] ************************************************* 2026-03-17 00:57:03.570253 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:00.316) 0:02:29.657 ********* 2026-03-17 00:57:03.570265 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.570275 | orchestrator | 2026-03-17 00:57:03.570286 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-17 00:57:03.570297 | orchestrator | Tuesday 17 March 2026 00:53:33 +0000 (0:00:00.969) 0:02:30.626 ********* 2026-03-17 00:57:03.570309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:57:03.570330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:57:03.570343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:57:03.570361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 
00:57:03.570378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:57:03.570470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:57:03.570490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570520 | orchestrator | 2026-03-17 00:57:03.570536 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-17 00:57:03.570554 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:03.472) 0:02:34.099 ********* 2026-03-17 00:57:03.570581 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:57:03.570601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:57:03.570618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570635 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.570664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:57:03.570685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:57:03.570705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570716 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.570733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:57:03.570745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:57:03.570757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:57:03.570768 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.570778 | orchestrator | 2026-03-17 00:57:03.570789 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-17 00:57:03.570806 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:01.167) 0:02:35.267 ********* 2026-03-17 00:57:03.570819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570853 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.570864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570887 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.570898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:57:03.570920 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.570930 | orchestrator | 2026-03-17 00:57:03.570941 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-17 00:57:03.570952 | orchestrator | Tuesday 17 March 2026 00:53:39 +0000 (0:00:01.133) 0:02:36.401 ********* 2026-03-17 00:57:03.570962 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.570973 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.570983 | orchestrator | changed: [testbed-node-2] 2026-03-17 
00:57:03.570994 | orchestrator | 2026-03-17 00:57:03.571048 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-17 00:57:03.571059 | orchestrator | Tuesday 17 March 2026 00:53:40 +0000 (0:00:01.302) 0:02:37.703 ********* 2026-03-17 00:57:03.571070 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.571086 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.571098 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.571108 | orchestrator | 2026-03-17 00:57:03.571119 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-17 00:57:03.571130 | orchestrator | Tuesday 17 March 2026 00:53:42 +0000 (0:00:02.091) 0:02:39.795 ********* 2026-03-17 00:57:03.571140 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.571151 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.571161 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.571172 | orchestrator | 2026-03-17 00:57:03.571183 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-17 00:57:03.571194 | orchestrator | Tuesday 17 March 2026 00:53:42 +0000 (0:00:00.299) 0:02:40.094 ********* 2026-03-17 00:57:03.571204 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.571215 | orchestrator | 2026-03-17 00:57:03.571226 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-17 00:57:03.571237 | orchestrator | Tuesday 17 March 2026 00:53:44 +0000 (0:00:01.288) 0:02:41.383 ********* 2026-03-17 00:57:03.571248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 00:57:03.571291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.571316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 00:57:03.571337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.571366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-03-17 00:57:03.571387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.571410 | orchestrator | 2026-03-17 00:57:03.571421 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-17 00:57:03.571432 | orchestrator | Tuesday 17 March 2026 00:53:48 +0000 (0:00:04.187) 0:02:45.571 ********* 2026-03-17 00:57:03.572365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-03-17 00:57:03.572402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.572416 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.572436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 00:57:03.572449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.572472 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.572543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 00:57:03.572558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.572569 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.572580 | orchestrator | 2026-03-17 00:57:03.572591 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-17 00:57:03.572602 | orchestrator | Tuesday 17 March 2026 00:53:48 +0000 (0:00:00.683) 0:02:46.255 ********* 2026-03-17 00:57:03.572613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:57:03.572625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:57:03.572637 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.572648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:57:03.572659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:57:03.572670 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.572681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-03-17 00:57:03.572697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:57:03.572708 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.572719 | orchestrator | 2026-03-17 00:57:03.572730 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-17 00:57:03.572741 | orchestrator | Tuesday 17 March 2026 00:53:50 +0000 (0:00:01.153) 0:02:47.408 ********* 2026-03-17 00:57:03.572758 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.572768 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.572780 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.572790 | orchestrator | 2026-03-17 00:57:03.572801 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-17 00:57:03.572812 | orchestrator | Tuesday 17 March 2026 00:53:51 +0000 (0:00:01.436) 0:02:48.844 ********* 2026-03-17 00:57:03.572823 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.572834 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.572845 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.572856 | orchestrator | 2026-03-17 00:57:03.572866 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-17 00:57:03.572877 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:02.471) 0:02:51.316 ********* 2026-03-17 00:57:03.572888 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.572899 | orchestrator | 2026-03-17 00:57:03.572910 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-17 00:57:03.572921 | orchestrator | Tuesday 17 March 2026 00:53:55 +0000 (0:00:01.180) 
0:02:52.497 ********* 2026-03-17 00:57:03.572932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:57:03.573076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:57:03.573145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:57:03.573265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.573299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573309 | orchestrator |
2026-03-17 00:57:03.573319 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-17 00:57:03.573329 | orchestrator | Tuesday 17 March 2026 00:54:02 +0000 (0:00:06.881) 0:02:59.379 *********
2026-03-17 00:57:03.573413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-17 00:57:03.573436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573494 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.573517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-17 00:57:03.573536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-17 00:57:03.573616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573698 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.573722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-17 00:57:03.573756 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.573772 | orchestrator |
2026-03-17 00:57:03.573785 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-17 00:57:03.573795 | orchestrator | Tuesday 17 March 2026 00:54:03 +0000 (0:00:01.597) 0:03:00.976 *********
2026-03-17 00:57:03.573805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573825 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.573835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573942 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.573951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-17 00:57:03.573961 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.573971 | orchestrator |
2026-03-17 00:57:03.573981 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-17 00:57:03.574051 | orchestrator | Tuesday 17 March 2026 00:54:04 +0000 (0:00:01.276) 0:03:02.252 *********
2026-03-17 00:57:03.574064 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.574076 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.574085 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.574095 | orchestrator |
2026-03-17 00:57:03.574104 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-17 00:57:03.574114 | orchestrator | Tuesday 17 March 2026 00:54:06 +0000 (0:00:01.597) 0:03:03.850 *********
2026-03-17 00:57:03.574123 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.574133 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.574142 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.574152 | orchestrator |
2026-03-17 00:57:03.574161 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-17 00:57:03.574171 | orchestrator | Tuesday 17 March 2026 00:54:08 +0000 (0:00:02.247) 0:03:06.097 *********
2026-03-17 00:57:03.574180 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.574190 | orchestrator |
2026-03-17 00:57:03.574199 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-17 00:57:03.574209 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:01.208) 0:03:07.305 *********
2026-03-17 00:57:03.574219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:57:03.574229 | orchestrator |
2026-03-17 00:57:03.574239 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-17 00:57:03.574248 | orchestrator | Tuesday 17 March 2026 00:54:13 +0000 (0:00:03.150) 0:03:10.456 *********
2026-03-17 00:57:03.574265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574373 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.574384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574410 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.574482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574514 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.574524 | orchestrator |
2026-03-17 00:57:03.574533 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-17 00:57:03.574543 | orchestrator | Tuesday 17 March 2026 00:54:16 +0000 (0:00:03.416) 0:03:13.872 *********
2026-03-17 00:57:03.574558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574579 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.574692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574725 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.574740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-17 00:57:03.574817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-17 00:57:03.574832 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.574842 | orchestrator |
2026-03-17 00:57:03.574852 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-17 00:57:03.574861 | orchestrator | Tuesday 17 March 2026 00:54:19 +0000 (0:00:02.664) 0:03:16.537 *********
2026-03-17 00:57:03.574871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.574882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.574892 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.574906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.574918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.574927 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.574937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.575100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-17 00:57:03.575118 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.575128 | orchestrator |
2026-03-17 00:57:03.575138 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-17 00:57:03.575149 | orchestrator | Tuesday 17 March 2026 00:54:21 +0000 (0:00:01.942) 0:03:18.480 *********
2026-03-17 00:57:03.575159 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.575168 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.575178 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.575188 | orchestrator |
2026-03-17 00:57:03.575197 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-17 00:57:03.575207 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:01.795) 0:03:20.275 *********
2026-03-17 00:57:03.575217 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.575226 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.575236 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.575246 | orchestrator |
2026-03-17 00:57:03.575255 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-17 00:57:03.575265 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:01.493) 0:03:21.769 *********
2026-03-17 00:57:03.575274 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.575284 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.575294 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.575303 | orchestrator |
2026-03-17 00:57:03.575313 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-17 00:57:03.575322 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.278) 0:03:22.048 *********
2026-03-17 00:57:03.575332 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:03.575342 | orchestrator |
2026-03-17 00:57:03.575352 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-17 00:57:03.575361 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:01.136) 0:03:23.185 *********
2026-03-17 00:57:03.575372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575418 | orchestrator |
2026-03-17 00:57:03.575427 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-17 00:57:03.575437 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:01.402) 0:03:24.587 *********
2026-03-17 00:57:03.575519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575533 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.575544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575554 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.575564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:57:03.575574 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.575583 | orchestrator |
2026-03-17 00:57:03.575591 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-17 00:57:03.575604 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:00.339) 0:03:24.927 *********
2026-03-17 00:57:03.575617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-17 00:57:03.575625 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.575633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-17 00:57:03.575642 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.575650 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 00:57:03.575658 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.575666 | orchestrator | 2026-03-17 00:57:03.575673 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-17 00:57:03.575682 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.774) 0:03:25.701 ********* 2026-03-17 00:57:03.575689 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.575698 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.575706 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.575713 | orchestrator | 2026-03-17 00:57:03.575721 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-17 00:57:03.575729 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.366) 0:03:26.067 ********* 2026-03-17 00:57:03.575737 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.575745 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.575753 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.575761 | orchestrator | 2026-03-17 00:57:03.575769 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-17 00:57:03.575776 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:01.057) 0:03:27.125 ********* 2026-03-17 00:57:03.575784 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.575792 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.575800 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.575808 | orchestrator | 2026-03-17 00:57:03.575816 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-03-17 00:57:03.575871 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:00.254) 0:03:27.380 ********* 2026-03-17 00:57:03.575883 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.575891 | orchestrator | 2026-03-17 00:57:03.575899 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-17 00:57:03.575907 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:01.303) 0:03:28.683 ********* 2026-03-17 00:57:03.575915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 00:57:03.575930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 00:57:03.575943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.575952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.576139 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.576244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.576470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.576484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-17 00:57:03.576579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:57:03.576736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:57:03.576789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.576805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.576925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.576936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.576951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.576968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2026-03-17 00:57:03.577041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.577058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:57:03.577142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.577157 | orchestrator | 2026-03-17 00:57:03.577166 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-17 00:57:03.577174 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:04.063) 0:03:32.747 ********* 2026-03-17 00:57:03.577183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:57:03.577191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.577300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-03-17 00:57:03.577309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:57:03.577347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.577413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.577572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:57:03.577621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:57:03.577708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.577719 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.577728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.577817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 
00:57:03.577837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:57:03.577849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.577954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.577977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-03-17 00:57:03.577991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.578168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.578185 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.578193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.578201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:57:03.578208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:57:03.578221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-03-17 00:57:03.578238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:57:03.578265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:57:03.578274 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.578281 | orchestrator | 2026-03-17 00:57:03.578288 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-17 00:57:03.578296 | orchestrator | Tuesday 17 March 2026 00:54:37 
+0000 (0:00:01.606) 0:03:34.353 ********* 2026-03-17 00:57:03.578303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578319 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.578326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578339 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.578346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:57:03.578361 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.578368 | orchestrator | 2026-03-17 00:57:03.578375 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-17 00:57:03.578382 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:01.242) 0:03:35.596 ********* 2026-03-17 00:57:03.578389 | orchestrator | 
changed: [testbed-node-0] 2026-03-17 00:57:03.578403 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.578410 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.578417 | orchestrator | 2026-03-17 00:57:03.578424 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-17 00:57:03.578431 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:01.267) 0:03:36.863 ********* 2026-03-17 00:57:03.578438 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.578445 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.578455 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.578466 | orchestrator | 2026-03-17 00:57:03.578480 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-17 00:57:03.578492 | orchestrator | Tuesday 17 March 2026 00:54:41 +0000 (0:00:01.920) 0:03:38.784 ********* 2026-03-17 00:57:03.578504 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.578515 | orchestrator | 2026-03-17 00:57:03.578526 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-17 00:57:03.578538 | orchestrator | Tuesday 17 March 2026 00:54:42 +0000 (0:00:01.219) 0:03:40.003 ********* 2026-03-17 00:57:03.578546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.578578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.578588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.578597 | orchestrator | 2026-03-17 00:57:03.578605 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-17 00:57:03.578619 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:03.092) 0:03:43.096 ********* 2026-03-17 00:57:03.578630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.578639 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.578652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.578665 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.578704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.578718 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.578728 | orchestrator | 2026-03-17 00:57:03.578739 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-17 00:57:03.578750 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:00.499) 0:03:43.595 ********* 2026-03-17 00:57:03.578762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  
2026-03-17 00:57:03.578773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:57:03.578786 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.578798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:57:03.578813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:57:03.578820 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.578827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:57:03.578834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:57:03.578841 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.578848 | orchestrator | 2026-03-17 00:57:03.578855 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-17 00:57:03.578861 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:01.468) 0:03:45.063 ********* 2026-03-17 00:57:03.578868 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.578876 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.578888 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 00:57:03.578898 | orchestrator | 2026-03-17 00:57:03.578909 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-17 00:57:03.578920 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:01.406) 0:03:46.470 ********* 2026-03-17 00:57:03.578932 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.578943 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.578960 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.578971 | orchestrator | 2026-03-17 00:57:03.578981 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-17 00:57:03.578988 | orchestrator | Tuesday 17 March 2026 00:54:51 +0000 (0:00:01.930) 0:03:48.400 ********* 2026-03-17 00:57:03.578994 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.579028 | orchestrator | 2026-03-17 00:57:03.579036 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-17 00:57:03.579043 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:01.273) 0:03:49.674 ********* 2026-03-17 00:57:03.579052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.579088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.579105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.579132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579186 | orchestrator | 2026-03-17 00:57:03.579193 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-17 00:57:03.579200 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:03.436) 0:03:53.111 ********* 2026-03-17 00:57:03.579210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.579218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579256 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.579264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.579271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579289 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.579296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.579327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.579344 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.579350 | orchestrator | 2026-03-17 00:57:03.579357 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-17 00:57:03.579364 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.543) 0:03:53.654 ********* 2026-03-17 00:57:03.579371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579400 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.579410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579438 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.579445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:57:03.579480 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.579487 | orchestrator | 2026-03-17 00:57:03.579494 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-17 00:57:03.579519 | orchestrator | Tuesday 17 March 2026 00:54:57 +0000 (0:00:00.796) 0:03:54.450 ********* 2026-03-17 00:57:03.579527 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.579533 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.579540 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.579547 | orchestrator | 2026-03-17 00:57:03.579554 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-17 00:57:03.579561 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:01.582) 0:03:56.033 ********* 2026-03-17 00:57:03.579567 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.579574 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.579581 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.579587 | orchestrator | 2026-03-17 00:57:03.579594 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-17 00:57:03.579601 | orchestrator | Tuesday 17 March 2026 00:55:00 +0000 (0:00:01.973) 0:03:58.007 ********* 2026-03-17 00:57:03.579608 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.579615 | orchestrator | 2026-03-17 00:57:03.579621 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 
2026-03-17 00:57:03.579628 | orchestrator | Tuesday 17 March 2026 00:55:01 +0000 (0:00:01.173) 0:03:59.180 ********* 2026-03-17 00:57:03.579635 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-17 00:57:03.579642 | orchestrator | 2026-03-17 00:57:03.579650 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-17 00:57:03.579656 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:01.139) 0:04:00.319 ********* 2026-03-17 00:57:03.579663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:57:03.579671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:57:03.579681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:57:03.579693 | orchestrator | 2026-03-17 00:57:03.579701 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-17 00:57:03.579707 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:03.477) 0:04:03.797 ********* 2026-03-17 00:57:03.579714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.579721 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.579729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.579736 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.579761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.579770 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.579777 | orchestrator | 2026-03-17 00:57:03.579784 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-17 00:57:03.579791 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:01.157) 0:04:04.955 ********* 2026-03-17 00:57:03.579798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579812 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.579819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579833 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.579840 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:57:03.579859 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.579866 | orchestrator | 2026-03-17 00:57:03.579876 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:57:03.579883 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:01.840) 0:04:06.796 ********* 2026-03-17 00:57:03.579890 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.579896 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.579903 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.579910 | orchestrator | 2026-03-17 00:57:03.579917 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-17 00:57:03.579923 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:02.327) 0:04:09.123 ********* 2026-03-17 00:57:03.579930 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.579937 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.579944 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.579950 | orchestrator | 2026-03-17 00:57:03.579957 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-17 00:57:03.579963 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:03.155) 0:04:12.279 ********* 2026-03-17 00:57:03.579971 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-17 00:57:03.579978 | orchestrator | 2026-03-17 00:57:03.579984 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-17 00:57:03.579991 | orchestrator | Tuesday 17 March 2026 00:55:15 +0000 (0:00:00.847) 0:04:13.126 ********* 2026-03-17 00:57:03.580018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580027 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580064 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.580071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580078 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580084 | orchestrator | 2026-03-17 00:57:03.580091 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-17 00:57:03.580105 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:01.428) 0:04:14.554 ********* 2026-03-17 00:57:03.580112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580119 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580133 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.580144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:57:03.580151 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580157 | orchestrator | 2026-03-17 00:57:03.580164 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-17 00:57:03.580171 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:01.725) 0:04:16.280 ********* 2026-03-17 00:57:03.580177 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580184 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.580191 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580197 | orchestrator | 2026-03-17 00:57:03.580204 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:57:03.580211 | orchestrator | Tuesday 17 March 2026 00:55:20 +0000 (0:00:01.191) 0:04:17.471 ********* 2026-03-17 00:57:03.580218 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.580225 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.580232 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.580238 | orchestrator | 2026-03-17 00:57:03.580245 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-17 00:57:03.580252 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:02.233) 0:04:19.705 ********* 2026-03-17 00:57:03.580258 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.580265 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.580272 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.580278 | 
orchestrator | 2026-03-17 00:57:03.580285 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-17 00:57:03.580292 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:02.624) 0:04:22.329 ********* 2026-03-17 00:57:03.580298 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-17 00:57:03.580305 | orchestrator | 2026-03-17 00:57:03.580312 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-17 00:57:03.580319 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:00.715) 0:04:23.045 ********* 2026-03-17 00:57:03.580351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580360 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580373 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:57:03.580380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580387 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580394 | orchestrator | 2026-03-17 00:57:03.580400 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-17 00:57:03.580407 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:01.120) 0:04:24.166 ********* 2026-03-17 00:57:03.580418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580425 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580439 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.580445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:57:03.580453 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580459 | orchestrator | 2026-03-17 00:57:03.580471 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-17 00:57:03.580478 | orchestrator | Tuesday 17 March 2026 00:55:27 +0000 (0:00:01.069) 0:04:25.235 ********* 2026-03-17 00:57:03.580484 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.580491 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.580498 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.580504 | orchestrator | 2026-03-17 00:57:03.580511 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:57:03.580518 | orchestrator | Tuesday 17 March 2026 00:55:29 +0000 (0:00:01.456) 0:04:26.692 ********* 2026-03-17 00:57:03.580525 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.580551 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.580559 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.580566 | orchestrator | 2026-03-17 00:57:03.580572 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2026-03-17 00:57:03.580579 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:02.575) 0:04:29.267 ********* 2026-03-17 00:57:03.580586 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:03.580593 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:03.580599 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:03.580606 | orchestrator | 2026-03-17 00:57:03.580613 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-17 00:57:03.580625 | orchestrator | Tuesday 17 March 2026 00:55:34 +0000 (0:00:02.752) 0:04:32.020 ********* 2026-03-17 00:57:03.580636 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.580652 | orchestrator | 2026-03-17 00:57:03.580668 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-17 00:57:03.580678 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:01.229) 0:04:33.249 ********* 2026-03-17 00:57:03.580689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2026-03-17 00:57:03.580701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.580717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.580800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.580816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.580828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.580843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.580857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.580918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.580925 | orchestrator | 2026-03-17 00:57:03.580932 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-17 00:57:03.580939 | orchestrator | Tuesday 17 March 2026 00:55:39 +0000 (0:00:03.427) 0:04:36.677 ********* 2026-03-17 00:57:03.580953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.580960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.580986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.580995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.581019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.581026 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.581037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.581049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.581056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.581082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.581091 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.581098 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 00:57:03.581116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 00:57:03.581128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.581135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 00:57:03.581142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:57:03.581166 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581175 | orchestrator | 2026-03-17 00:57:03.581181 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-17 00:57:03.581188 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:01.037) 0:04:37.715 ********* 2026-03-17 00:57:03.581195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581210 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.581216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581230 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 00:57:03.581256 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581263 | orchestrator | 2026-03-17 00:57:03.581269 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-17 00:57:03.581276 | orchestrator | Tuesday 17 March 2026 00:55:41 +0000 (0:00:00.882) 0:04:38.597 ********* 2026-03-17 00:57:03.581282 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.581289 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.581295 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.581302 | orchestrator | 2026-03-17 00:57:03.581309 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-17 00:57:03.581315 | orchestrator | Tuesday 17 March 2026 00:55:42 +0000 (0:00:01.428) 0:04:40.026 ********* 2026-03-17 00:57:03.581322 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:03.581328 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:03.581335 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:03.581342 | orchestrator | 2026-03-17 00:57:03.581352 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-17 00:57:03.581359 | orchestrator | Tuesday 17 March 2026 00:55:44 +0000 (0:00:02.063) 0:04:42.089 ********* 2026-03-17 00:57:03.581366 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.581372 | orchestrator | 2026-03-17 00:57:03.581379 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-17 00:57:03.581386 | orchestrator | Tuesday 17 March 2026 00:55:46 +0000 (0:00:01.421) 0:04:43.511 ********* 2026-03-17 00:57:03.581393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:57:03.581420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:57:03.581429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:57:03.581441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:57:03.581510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:57:03.581552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:57:03.581561 | orchestrator | 2026-03-17 00:57:03.581568 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] 
*** 2026-03-17 00:57:03.581575 | orchestrator | Tuesday 17 March 2026 00:55:50 +0000 (0:00:04.790) 0:04:48.301 ********* 2026-03-17 00:57:03.581582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:57:03.581595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:57:03.581602 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.581615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:57:03.581623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:57:03.581630 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:57:03.581668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:57:03.581675 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581682 | orchestrator | 2026-03-17 00:57:03.581688 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-17 00:57:03.581695 | orchestrator | Tuesday 17 March 2026 00:55:51 +0000 (0:00:00.956) 0:04:49.258 ********* 2026-03-17 00:57:03.581706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-17 00:57:03.581713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-17 00:57:03.581728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581735 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.581742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581756 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-17 00:57:03.581770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-17 00:57:03.581808 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581815 | orchestrator | 2026-03-17 00:57:03.581821 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-17 00:57:03.581828 | orchestrator | Tuesday 17 March 2026 00:55:53 +0000 (0:00:01.266) 0:04:50.524 ********* 2026-03-17 00:57:03.581835 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.581842 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581848 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581855 | orchestrator | 2026-03-17 00:57:03.581862 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-17 00:57:03.581868 | orchestrator | Tuesday 17 March 2026 00:55:53 +0000 (0:00:00.440) 0:04:50.964 ********* 2026-03-17 00:57:03.581875 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
00:57:03.581882 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.581888 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.581895 | orchestrator | 2026-03-17 00:57:03.581902 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-17 00:57:03.581908 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:01.333) 0:04:52.298 ********* 2026-03-17 00:57:03.581915 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.581922 | orchestrator | 2026-03-17 00:57:03.581928 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-17 00:57:03.581935 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:01.648) 0:04:53.946 ********* 2026-03-17 00:57:03.581942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:57:03.581953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.581960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:57:03.581968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.582046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:57:03.582098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.582131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:57:03.582165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:57:03.582189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:57:03.582259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582298 | orchestrator | 2026-03-17 00:57:03.582305 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-17 00:57:03.582312 | orchestrator | Tuesday 17 March 2026 00:56:00 +0000 (0:00:03.957) 0:04:57.903 ********* 2026-03-17 00:57:03.582322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:57:03.582330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.582337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:57:03.582378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:57:03.582403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.582432 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.582439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:57:03.582472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:57:03.582501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:57:03.582519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582526 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.582533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582540 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:57:03.582576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:57:03.582583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:57:03.582597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:57:03.582604 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.582616 | orchestrator | 2026-03-17 00:57:03.582623 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-17 00:57:03.582630 | orchestrator | Tuesday 17 March 2026 00:56:01 +0000 (0:00:00.799) 0:04:58.703 ********* 2026-03-17 00:57:03.582637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-17 00:57:03.582644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:57:03.582655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-17 
00:57:03.582671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582678 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.582685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:57:03.582692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582705 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.582716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-17 00:57:03.582723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:57:03.582730 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:57:03.582744 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.582751 | orchestrator | 2026-03-17 00:57:03.582757 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-17 00:57:03.582764 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:01.088) 0:04:59.791 ********* 2026-03-17 00:57:03.582775 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.582782 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.582788 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.582795 | orchestrator | 2026-03-17 00:57:03.582802 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-17 00:57:03.582809 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:00.426) 0:05:00.218 ********* 2026-03-17 00:57:03.582815 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.582822 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.582829 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.582835 | orchestrator | 2026-03-17 00:57:03.582842 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-17 00:57:03.582849 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:01.076) 0:05:01.295 ********* 
2026-03-17 00:57:03.582855 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.582862 | orchestrator | 2026-03-17 00:57:03.582869 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-17 00:57:03.582875 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:01.299) 0:05:02.594 ********* 2026-03-17 00:57:03.582886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:03.582894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:03.582905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:03.582917 | orchestrator | 2026-03-17 00:57:03.582924 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-17 00:57:03.582931 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:02.592) 0:05:05.187 ********* 2026-03-17 00:57:03.582937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:03.582945 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.582957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:03.582965 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.582972 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:03.582979 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.582985 | orchestrator | 2026-03-17 00:57:03.582992 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-17 00:57:03.583042 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:00.444) 0:05:05.631 ********* 2026-03-17 00:57:03.583054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:57:03.583067 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.583075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:57:03.583081 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.583088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:57:03.583095 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.583102 | orchestrator | 2026-03-17 00:57:03.583109 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-17 00:57:03.583116 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:00.639) 0:05:06.270 ********* 2026-03-17 00:57:03.583122 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.583129 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.583136 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.583142 | orchestrator | 2026-03-17 00:57:03.583149 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-17 00:57:03.583156 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:00.787) 0:05:07.058 ********* 2026-03-17 00:57:03.583163 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:03.583169 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:03.583175 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:03.583181 | orchestrator | 2026-03-17 00:57:03.583187 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-17 00:57:03.583193 | orchestrator | Tuesday 17 March 2026 00:56:11 +0000 (0:00:01.375) 0:05:08.433 ********* 2026-03-17 00:57:03.583200 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:03.583206 | orchestrator | 2026-03-17 00:57:03.583212 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-17 00:57:03.583218 | orchestrator | Tuesday 17 March 2026 00:56:12 +0000 (0:00:01.449) 0:05:09.883 ********* 2026-03-17 00:57:03.583228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:57:03.583280 | orchestrator | 2026-03-17 00:57:03.583287 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-17 00:57:03.583293 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:06.616) 0:05:16.499 ********* 2026-03-17 00:57:03.583302 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583320 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583342 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-17 00:57:03.583371 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583377 | orchestrator |
2026-03-17 00:57:03.583384 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-17 00:57:03.583390 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:00.822) 0:05:17.322 *********
2026-03-17 00:57:03.583396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583422 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583464 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-17 00:57:03.583495 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583502 | orchestrator |
2026-03-17 00:57:03.583508 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-17 00:57:03.583514 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.853) 0:05:18.176 *********
2026-03-17 00:57:03.583520 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.583526 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.583532 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.583538 | orchestrator |
2026-03-17 00:57:03.583544 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-17 00:57:03.583550 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:01.238) 0:05:19.414 *********
2026-03-17 00:57:03.583559 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.583566 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.583572 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.583578 | orchestrator |
2026-03-17 00:57:03.583584 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-17 00:57:03.583590 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:02.042) 0:05:21.457 *********
2026-03-17 00:57:03.583596 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583602 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583609 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583615 | orchestrator |
2026-03-17 00:57:03.583621 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-17 00:57:03.583627 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.472) 0:05:21.930 *********
2026-03-17 00:57:03.583633 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583640 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583646 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583652 | orchestrator |
2026-03-17 00:57:03.583658 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-17 00:57:03.583664 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.277) 0:05:22.207 *********
2026-03-17 00:57:03.583671 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583677 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583683 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583689 | orchestrator |
2026-03-17 00:57:03.583695 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-17 00:57:03.583702 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.271) 0:05:22.479 *********
2026-03-17 00:57:03.583708 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583714 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583720 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583726 | orchestrator |
2026-03-17 00:57:03.583732 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-17 00:57:03.583738 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.257) 0:05:22.737 *********
2026-03-17 00:57:03.583749 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583755 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583761 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583767 | orchestrator |
2026-03-17 00:57:03.583774 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-17 00:57:03.583780 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.455) 0:05:23.192 *********
2026-03-17 00:57:03.583786 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.583792 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.583798 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.583804 | orchestrator |
2026-03-17 00:57:03.583811 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-17 00:57:03.583817 | orchestrator | Tuesday 17 March 2026 00:56:26 +0000 (0:00:00.480) 0:05:23.673 *********
2026-03-17 00:57:03.583823 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.583829 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.583836 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.583842 | orchestrator |
2026-03-17 00:57:03.583848 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-17 00:57:03.583854 | orchestrator | Tuesday 17 March 2026 00:56:27 +0000 (0:00:00.493) 0:05:24.391 *********
2026-03-17 00:57:03.583860 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.583870 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.583877 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.583884 | orchestrator |
2026-03-17 00:57:03.583890 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-17 00:57:03.583896 | orchestrator | Tuesday 17 March 2026 00:56:27 +0000 (0:00:00.904) 0:05:24.884 *********
2026-03-17 00:57:03.583902 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.583909 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.583915 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.583921 | orchestrator |
2026-03-17 00:57:03.583927 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-17 00:57:03.583933 | orchestrator | Tuesday 17 March 2026 00:56:28 +0000 (0:00:00.930) 0:05:25.789 *********
2026-03-17 00:57:03.583939 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.583946 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.583956 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.583967 | orchestrator |
2026-03-17 00:57:03.583979 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-17 00:57:03.583996 | orchestrator | Tuesday 17 March 2026 00:56:29 +0000 (0:00:00.895) 0:05:26.719 *********
2026-03-17 00:57:03.584021 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.584031 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.584041 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.584050 | orchestrator |
2026-03-17 00:57:03.584060 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-17 00:57:03.584070 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:00.895) 0:05:27.615 *********
2026-03-17 00:57:03.584079 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.584089 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.584098 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.584108 | orchestrator |
2026-03-17 00:57:03.584117 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-17 00:57:03.584125 | orchestrator | Tuesday 17 March 2026 00:56:34 +0000 (0:00:04.291) 0:05:31.906 *********
2026-03-17 00:57:03.584134 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.584144 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.584153 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.584163 | orchestrator |
2026-03-17 00:57:03.584173 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-17 00:57:03.584184 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:03.026) 0:05:34.932 *********
2026-03-17 00:57:03.584195 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.584206 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.584227 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.584234 | orchestrator |
2026-03-17 00:57:03.584240 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-17 00:57:03.584246 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:08.164) 0:05:43.097 *********
2026-03-17 00:57:03.584252 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.584266 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.584273 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.584280 | orchestrator |
2026-03-17 00:57:03.584286 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-17 00:57:03.584292 | orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:03.814) 0:05:46.912 *********
2026-03-17 00:57:03.584298 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:03.584304 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:03.584310 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:03.584316 | orchestrator |
2026-03-17 00:57:03.584322 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-17 00:57:03.584328 | orchestrator | Tuesday 17 March 2026 00:56:58 +0000 (0:00:09.036) 0:05:55.949 *********
2026-03-17 00:57:03.584334 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584340 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584346 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584353 | orchestrator |
2026-03-17 00:57:03.584359 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-17 00:57:03.584365 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.502) 0:05:56.451 *********
2026-03-17 00:57:03.584371 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584377 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584383 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584389 | orchestrator |
2026-03-17 00:57:03.584395 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-17 00:57:03.584401 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.314) 0:05:56.766 *********
2026-03-17 00:57:03.584407 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584414 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584420 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584426 | orchestrator |
2026-03-17 00:57:03.584432 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-17 00:57:03.584438 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.298) 0:05:57.064 *********
2026-03-17 00:57:03.584444 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584450 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584456 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584462 | orchestrator |
2026-03-17 00:57:03.584468 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-17 00:57:03.584474 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.301) 0:05:57.366 *********
2026-03-17 00:57:03.584481 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584487 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584493 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584499 | orchestrator |
2026-03-17 00:57:03.584505 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-17 00:57:03.584511 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.518) 0:05:57.884 *********
2026-03-17 00:57:03.584517 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:03.584523 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:03.584529 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:03.584535 | orchestrator |
2026-03-17 00:57:03.584541 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-17 00:57:03.584547 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.302) 0:05:58.187 *********
2026-03-17 00:57:03.584554 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.584560 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.584566 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.584578 | orchestrator |
2026-03-17 00:57:03.584589 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-17 00:57:03.584595 | orchestrator | Tuesday 17 March 2026 00:57:01 +0000 (0:00:00.809) 0:05:58.997 *********
2026-03-17 00:57:03.584602 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:03.584608 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:03.584614 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:03.584620 | orchestrator |
2026-03-17 00:57:03.584626 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:57:03.584632 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-17 00:57:03.584639 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-17 00:57:03.584645 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-17 00:57:03.584651 | orchestrator |
2026-03-17 00:57:03.584658 | orchestrator |
2026-03-17 00:57:03.584664 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:57:03.584670 | orchestrator | Tuesday 17 March 2026 00:57:02 +0000 (0:00:00.762) 0:05:59.759 *********
2026-03-17 00:57:03.584676 | orchestrator | ===============================================================================
2026-03-17 00:57:03.584682 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.04s
2026-03-17 00:57:03.584689 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.17s
2026-03-17 00:57:03.584695 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 6.88s
2026-03-17 00:57:03.584701 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.62s
2026-03-17 00:57:03.584707 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.48s
2026-03-17 00:57:03.584713 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.79s
2026-03-17 00:57:03.584719 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.29s
2026-03-17 00:57:03.584725 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.22s
2026-03-17 00:57:03.584731 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.21s
2026-03-17 00:57:03.584741 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.19s
2026-03-17 00:57:03.584747 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.06s
2026-03-17 00:57:03.584753 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.97s
2026-03-17 00:57:03.584759 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.96s
2026-03-17 00:57:03.584766 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.81s
2026-03-17 00:57:03.584772 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.69s
2026-03-17 00:57:03.584778 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.61s
2026-03-17 00:57:03.584784 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.59s
2026-03-17 00:57:03.584790 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.48s
2026-03-17 00:57:03.584796 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.47s
2026-03-17 00:57:03.584802 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.44s
2026-03-17 00:57:03.584808 | orchestrator | 2026-03-17 00:57:03 | INFO  | Task
22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:57:03.584815 | orchestrator | 2026-03-17 00:57:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:57:06.590878 | orchestrator | 2026-03-17 00:57:06 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED
2026-03-17 00:57:06.592834 | orchestrator | 2026-03-17 00:57:06 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED
2026-03-17 00:57:06.595344 | orchestrator | 2026-03-17 00:57:06 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED
2026-03-17 00:57:06.595587 | orchestrator | 2026-03-17 00:57:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:25.810313 | orchestrator | 2026-03-17 00:58:25 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED
2026-03-17 00:58:25.812446 | orchestrator | 2026-03-17 00:58:25 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in
state STARTED 2026-03-17 00:58:25.815162 | orchestrator | 2026-03-17 00:58:25 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:25.815204 | orchestrator | 2026-03-17 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:28.860696 | orchestrator | 2026-03-17 00:58:28 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:28.863142 | orchestrator | 2026-03-17 00:58:28 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:28.865756 | orchestrator | 2026-03-17 00:58:28 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:28.865807 | orchestrator | 2026-03-17 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:31.910165 | orchestrator | 2026-03-17 00:58:31 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:31.912462 | orchestrator | 2026-03-17 00:58:31 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:31.915653 | orchestrator | 2026-03-17 00:58:31 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:31.915699 | orchestrator | 2026-03-17 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:34.951657 | orchestrator | 2026-03-17 00:58:34 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:34.953453 | orchestrator | 2026-03-17 00:58:34 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:34.955358 | orchestrator | 2026-03-17 00:58:34 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:34.955598 | orchestrator | 2026-03-17 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:37.995495 | orchestrator | 2026-03-17 00:58:37 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:37.997520 | orchestrator 
| 2026-03-17 00:58:37 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:38.000040 | orchestrator | 2026-03-17 00:58:37 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:38.000090 | orchestrator | 2026-03-17 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:41.040442 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:41.042361 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:41.043843 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:41.043876 | orchestrator | 2026-03-17 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:44.082314 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:44.083777 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:44.085814 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:44.085892 | orchestrator | 2026-03-17 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:47.125526 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:47.129339 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:47.129415 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:47.129727 | orchestrator | 2026-03-17 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:50.177212 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task 
fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:50.179115 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:50.180546 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:50.180732 | orchestrator | 2026-03-17 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:53.230988 | orchestrator | 2026-03-17 00:58:53 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:53.234307 | orchestrator | 2026-03-17 00:58:53 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:53.236485 | orchestrator | 2026-03-17 00:58:53 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:53.236824 | orchestrator | 2026-03-17 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:56.290475 | orchestrator | 2026-03-17 00:58:56 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:56.292761 | orchestrator | 2026-03-17 00:58:56 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:56.294802 | orchestrator | 2026-03-17 00:58:56 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:56.294856 | orchestrator | 2026-03-17 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:59.338195 | orchestrator | 2026-03-17 00:58:59 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:58:59.340758 | orchestrator | 2026-03-17 00:58:59 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:58:59.344222 | orchestrator | 2026-03-17 00:58:59 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:58:59.344266 | orchestrator | 2026-03-17 00:58:59 | INFO  | Wait 1 second(s) until the next 
check 2026-03-17 00:59:02.381769 | orchestrator | 2026-03-17 00:59:02 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:02.383971 | orchestrator | 2026-03-17 00:59:02 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:02.384959 | orchestrator | 2026-03-17 00:59:02 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:59:02.385095 | orchestrator | 2026-03-17 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:05.424519 | orchestrator | 2026-03-17 00:59:05 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:05.426382 | orchestrator | 2026-03-17 00:59:05 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:05.428415 | orchestrator | 2026-03-17 00:59:05 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:59:05.428451 | orchestrator | 2026-03-17 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:08.462669 | orchestrator | 2026-03-17 00:59:08 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:08.464569 | orchestrator | 2026-03-17 00:59:08 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:08.466237 | orchestrator | 2026-03-17 00:59:08 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state STARTED 2026-03-17 00:59:08.466385 | orchestrator | 2026-03-17 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:11.516537 | orchestrator | 2026-03-17 00:59:11 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:11.519051 | orchestrator | 2026-03-17 00:59:11 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:11.521892 | orchestrator | 2026-03-17 00:59:11 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 
00:59:11.529383 | orchestrator | 2026-03-17 00:59:11 | INFO  | Task 22eef708-a8e4-4e89-abd9-ab92803db6aa is in state SUCCESS

[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Prepare deployment of Ceph services] *************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Tuesday 17 March 2026 00:48:40 +0000 (0:00:00.825) 0:00:00.825 *********
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-facts : Check if it is atomic host] *********************************
Tuesday 17 March 2026 00:48:42 +0000 (0:00:01.504) 0:00:02.330 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Tuesday 17 March 2026 00:48:43 +0000 (0:00:01.693) 0:00:04.023 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Check if podman binary is present] **************************
Tuesday 17 March 2026 00:48:44 +0000 (0:00:00.886) 0:00:04.910 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact container_binary] **********************************
Tuesday 17 March 2026 00:48:46 +0000 (0:00:01.850) 0:00:06.760 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Tuesday 17 March 2026 00:48:48 +0000 (0:00:01.436) 0:00:08.196 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Tuesday 17 March 2026 00:48:49 +0000 (0:00:01.044) 0:00:09.241 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Tuesday 17 March 2026 00:48:50 +0000 (0:00:00.805) 0:00:10.046 *********
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Tuesday 17 March 2026 00:48:51 +0000 (0:00:01.281) 0:00:11.328 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Tuesday 17 March 2026 00:48:52 +0000 (0:00:01.594) 0:00:12.923 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Tuesday 17 March 2026 00:48:53 +0000 (0:00:00.661) 0:00:13.584 *********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Find a running mon container] *******************************
Tuesday 17 March 2026 00:48:54 +0000 (0:00:01.338) 0:00:14.923 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Tuesday 17 March 2026 00:48:58 +0000 (0:00:03.412) 0:00:18.335 *********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Tuesday 17 March 2026 00:48:59 +0000 (0:00:00.822) 0:00:19.158 *********
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Tuesday 17 March 2026 00:49:01 +0000 (0:00:01.982) 0:00:21.140 *********
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Tuesday 17 March 2026 00:49:01 +0000 (0:00:00.414) 0:00:21.555 *********
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 00:48:55.896635', 'end': '2026-03-17 00:48:56.003729', 'delta': '0:00:00.107094', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 00:48:56.681348', 'end': '2026-03-17 00:48:56.780348', 'delta': '0:00:00.099000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 00:48:57.995921', 'end': '2026-03-17 00:48:58.094433', 'delta': '0:00:00.098512', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Tuesday 17 March 2026 00:49:02 +0000 (0:00:00.651) 0:00:22.207 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Tuesday 17 March 2026 00:49:03 +0000 (0:00:01.820) 0:00:24.028 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Tuesday 17 March 2026 00:49:04 +0000 (0:00:00.693) 0:00:24.721 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Get current fsid] *******************************************
Tuesday 17 March 2026 00:49:05 +0000 (0:00:00.942) 0:00:25.664 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-facts : Set_fact fsid] **********************************************
Tuesday 17 March 2026 00:49:06 +0000 (0:00:00.922) 0:00:26.586 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.836) 0:00:27.423 *********
skipping: [testbed-node-3]

TASK [ceph-facts : Generate cluster fsid] **************************************
Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.157) 0:00:27.500 *********
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact fsid] **********************************************
Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.157) 0:00:27.658 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve device link(s)] *************************************
Tuesday 17 March 2026 00:49:08 +0000 (0:00:00.830) 0:00:28.489 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Tuesday 17 March 2026 00:49:09 +0000 (0:00:00.823) 0:00:29.313 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
Tuesday 17 March 2026 00:49:09 +0000 (0:00:00.633) 0:00:29.947 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
Tuesday 17 March 2026 00:49:11 +0000 (0:00:01.106) 0:00:31.053 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
Tuesday 17 March 2026 00:49:11 +0000 (0:00:00.824) 0:00:31.877 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
Tuesday 17 March 2026 00:49:12 +0000 (0:00:01.145) 0:00:33.023 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Collect existed devices] ************************************
Tuesday 17 March 2026 00:49:13 +0000 (0:00:00.721) 0:00:33.744 *********
skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386', 'dm-uuid-LVM-6timofDkKT1hbgs1UiLHgm8I9lC3wjGeUFlOGZuZIlCxkSeT3VIDJBOooO84jJ4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2', 'dm-uuid-LVM-1aQ8jNKmNVPuSUkhlXwYUGiDvucOc05Mj89XXHB4DeQUyYPBdPYze9NHoCjcBTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535100 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5NLxCA-3OmV-UzBj-h29u-hGxB-8QDS-1x2KeN', 'scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b', 'scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cfj8Wt-Pc6p-KnzT-OxB4-bn7U-Wz17-huOjFT', 'scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451', 'scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63', 'scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535137 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.535158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77', 'dm-uuid-LVM-6cbvHb49d19dye5SGAJdR4tSnVL1sn3e3VYpoopj1ggoGa3fsfEBFRYtjRY42Zwu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57', 'dm-uuid-LVM-KRraYFqQ9BlELNTrII6HMgV69ppufHP8w3fspEp9JIWjJz6vi7Do1Q3YTQrhsZKv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771', 'dm-uuid-LVM-ysKBsJ06fJzf8zFG7udyrvhcFSNLKkXdy6bdG43Y2KDBxreI5nZ2cT5mbzpD8z3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5', 'dm-uuid-LVM-iBbCl8l2Cp21TLsab0LZb9UpEOZERcnCml6xqtK2mvkLdOPZUie3k4LRW6n3GT3Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535625 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZZigmv-scWw-dS3h-Kt7R-sNr1-R177-KHwZlS', 'scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15', 'scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GfGyPa-pmmX-8xWI-43WZ-LBYv-bkS1-Kty7h3', 'scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c', 'scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8', 'scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-09-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bK2hhS-QERv-lWih-5lqM-caSH-wXWc-LdaWOA', 'scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee', 'scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535735 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.535745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CF7sTO-CKEf-1owX-tMHc-nUDz-6un9-8O9zaO', 'scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9', 'scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65', 'scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.535766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535773 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.535779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.535897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-17 00:59:11.536201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part1', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part14', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part15', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part16', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.536209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.536215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.536301 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:59:11.536308 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.536315 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.536324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:59:11.536429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 00:59:11.536480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 00:59:11.536490 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.536496 | orchestrator |
2026-03-17 00:59:11.536502 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-17 00:59:11.536509 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:01.214) 0:00:34.958 *********
2026-03-17 00:59:11.536517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386', 'dm-uuid-LVM-6timofDkKT1hbgs1UiLHgm8I9lC3wjGeUFlOGZuZIlCxkSeT3VIDJBOooO84jJ4W'], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2', 'dm-uuid-LVM-1aQ8jNKmNVPuSUkhlXwYUGiDvucOc05Mj89XXHB4DeQUyYPBdPYze9NHoCjcBTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536627 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77', 'dm-uuid-LVM-6cbvHb49d19dye5SGAJdR4tSnVL1sn3e3VYpoopj1ggoGa3fsfEBFRYtjRY42Zwu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536644 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57', 'dm-uuid-LVM-KRraYFqQ9BlELNTrII6HMgV69ppufHP8w3fspEp9JIWjJz6vi7Do1Q3YTQrhsZKv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536969 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536977 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.536994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:59:11.537038 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771', 'dm-uuid-LVM-ysKBsJ06fJzf8zFG7udyrvhcFSNLKkXdy6bdG43Y2KDBxreI5nZ2cT5mbzpD8z3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5NLxCA-3OmV-UzBj-h29u-hGxB-8QDS-1x2KeN', 'scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b', 'scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537064 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5', 'dm-uuid-LVM-iBbCl8l2Cp21TLsab0LZb9UpEOZERcnCml6xqtK2mvkLdOPZUie3k4LRW6n3GT3Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cfj8Wt-Pc6p-KnzT-OxB4-bn7U-Wz17-huOjFT', 'scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451', 'scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537237 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63', 'scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537436 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537472 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.537514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bK2hhS-QERv-lWih-5lqM-caSH-wXWc-LdaWOA', 'scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee', 'scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CF7sTO-CKEf-1owX-tMHc-nUDz-6un9-8O9zaO', 'scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9', 'scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537595 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537601 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537608 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537625 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65', 'scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537634 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-17 00:59:11.537710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537737 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZZigmv-scWw-dS3h-Kt7R-sNr1-R177-KHwZlS', 'scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15', 'scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537792 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537806 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537818 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537854 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GfGyPa-pmmX-8xWI-43WZ-LBYv-bkS1-Kty7h3', 'scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c', 'scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537931 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537940 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537947 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.537991 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part1', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part14', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part15', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part16', 'scsi-SQEMU_QEMU_HARDDISK_25c1cf14-bcc6-40f1-b574-7a0e4cd20ee4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:59:11.538011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8', 'scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538067 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538105 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538113 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538120 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.538134 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c015fd1-9d9e-4b8e-bc57-ed6f41b9880a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:59:11.538147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.538180 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538188 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.538194 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.538201 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538219 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538228 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538235 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538241 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538314 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538326 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538337 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ad57a79-f08d-4fb4-9f95-65bfce46ba5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538351 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:59:11.538358 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.538366 | orchestrator | 2026-03-17 00:59:11.538413 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-17 00:59:11.538422 | orchestrator | Tuesday 17 March 2026 00:49:16 +0000 (0:00:01.757) 0:00:36.716 ********* 2026-03-17 00:59:11.538429 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.538436 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.538442 | orchestrator | ok: [testbed-node-4] 2026-03-17 
00:59:11.538448 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.538458 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.538465 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.538471 | orchestrator | 2026-03-17 00:59:11.538477 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-17 00:59:11.538483 | orchestrator | Tuesday 17 March 2026 00:49:18 +0000 (0:00:01.541) 0:00:38.257 ********* 2026-03-17 00:59:11.538489 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.538494 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.538499 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.538505 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.538517 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.538522 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.538527 | orchestrator | 2026-03-17 00:59:11.538533 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 00:59:11.538540 | orchestrator | Tuesday 17 March 2026 00:49:18 +0000 (0:00:00.465) 0:00:38.723 ********* 2026-03-17 00:59:11.538545 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.538551 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.538556 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.538562 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.538568 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.538574 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.538580 | orchestrator | 2026-03-17 00:59:11.538586 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-17 00:59:11.538594 | orchestrator | Tuesday 17 March 2026 00:49:19 +0000 (0:00:00.841) 0:00:39.564 ********* 2026-03-17 00:59:11.538603 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.538609 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:59:11.538615 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.538621 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.538627 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.538634 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.538644 | orchestrator | 2026-03-17 00:59:11.538650 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 00:59:11.538657 | orchestrator | Tuesday 17 March 2026 00:49:20 +0000 (0:00:00.550) 0:00:40.114 ********* 2026-03-17 00:59:11.538662 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.538668 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.538674 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.538693 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.538704 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.538711 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.538717 | orchestrator | 2026-03-17 00:59:11.538724 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-17 00:59:11.538731 | orchestrator | Tuesday 17 March 2026 00:49:20 +0000 (0:00:00.680) 0:00:40.795 ********* 2026-03-17 00:59:11.538739 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.538748 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.538755 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.538761 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.538768 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.538774 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.538781 | orchestrator | 2026-03-17 00:59:11.538787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-17 00:59:11.538793 | orchestrator | Tuesday 17 March 2026 00:49:21 +0000 (0:00:00.752) 0:00:41.547 ********* 
2026-03-17 00:59:11.538800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-17 00:59:11.538811 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-17 00:59:11.538822 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-17 00:59:11.538829 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-17 00:59:11.538836 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-17 00:59:11.538843 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 00:59:11.538850 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-17 00:59:11.538856 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-17 00:59:11.538865 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-17 00:59:11.538873 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-17 00:59:11.538881 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-17 00:59:11.538887 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-17 00:59:11.538894 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-17 00:59:11.538916 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-17 00:59:11.538924 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-17 00:59:11.538930 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-17 00:59:11.538936 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-17 00:59:11.538941 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-17 00:59:11.538947 | orchestrator | 2026-03-17 00:59:11.538952 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-17 00:59:11.538957 | orchestrator | Tuesday 17 March 2026 00:49:24 +0000 (0:00:03.126) 0:00:44.674 ********* 2026-03-17 00:59:11.538962 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-17 00:59:11.538968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 00:59:11.538973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-17 00:59:11.538979 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.538985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-17 00:59:11.538991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-17 00:59:11.538995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-17 00:59:11.538998 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.539004 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-17 00:59:11.539041 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-17 00:59:11.539050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-17 00:59:11.539056 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.539062 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:59:11.539069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:59:11.539075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:59:11.539081 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-17 00:59:11.539088 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-17 00:59:11.539095 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-17 00:59:11.539101 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.539108 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.539115 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-17 00:59:11.539121 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-17 00:59:11.539127 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-17 00:59:11.539133 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.539138 | orchestrator | 2026-03-17 00:59:11.539142 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-17 00:59:11.539147 | orchestrator | Tuesday 17 March 2026 00:49:25 +0000 (0:00:01.006) 0:00:45.681 ********* 2026-03-17 00:59:11.539151 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.539155 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.539159 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.539164 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.539168 | orchestrator | 2026-03-17 00:59:11.539173 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-17 00:59:11.539178 | orchestrator | Tuesday 17 March 2026 00:49:26 +0000 (0:00:00.897) 0:00:46.578 ********* 2026-03-17 00:59:11.539182 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539187 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.539191 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.539195 | orchestrator | 2026-03-17 00:59:11.539199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-17 00:59:11.539204 | orchestrator | Tuesday 17 March 2026 00:49:27 +0000 (0:00:00.481) 0:00:47.059 ********* 2026-03-17 00:59:11.539215 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539222 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.539231 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.539239 | orchestrator | 2026-03-17 00:59:11.539244 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-17 00:59:11.539250 | orchestrator | Tuesday 17 March 2026 00:49:27 +0000 (0:00:00.394) 0:00:47.454 ********* 2026-03-17 00:59:11.539256 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539262 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.539268 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.539273 | orchestrator | 2026-03-17 00:59:11.539279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-17 00:59:11.539285 | orchestrator | Tuesday 17 March 2026 00:49:28 +0000 (0:00:00.650) 0:00:48.105 ********* 2026-03-17 00:59:11.539292 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.539298 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.539304 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.539310 | orchestrator | 2026-03-17 00:59:11.539316 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-17 00:59:11.539327 | orchestrator | Tuesday 17 March 2026 00:49:28 +0000 (0:00:00.655) 0:00:48.760 ********* 2026-03-17 00:59:11.539334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.539341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.539347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.539356 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539364 | orchestrator | 2026-03-17 00:59:11.539370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-17 00:59:11.539376 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:00.365) 0:00:49.126 ********* 2026-03-17 00:59:11.539382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.539388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.539394 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.539400 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539406 | orchestrator | 2026-03-17 00:59:11.539412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 00:59:11.539418 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:00.350) 0:00:49.476 ********* 2026-03-17 00:59:11.539425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.539431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.539437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.539444 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539450 | orchestrator | 2026-03-17 00:59:11.539457 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 00:59:11.539463 | orchestrator | Tuesday 17 March 2026 00:49:29 +0000 (0:00:00.413) 0:00:49.890 ********* 2026-03-17 00:59:11.539469 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.539479 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.539487 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.539493 | orchestrator | 2026-03-17 00:59:11.539499 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-17 00:59:11.539505 | orchestrator | Tuesday 17 March 2026 00:49:30 +0000 (0:00:00.477) 0:00:50.368 ********* 2026-03-17 00:59:11.539511 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 00:59:11.539517 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 00:59:11.539553 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 00:59:11.539560 | orchestrator | 2026-03-17 00:59:11.539566 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-17 00:59:11.539572 | orchestrator | Tuesday 17 March 2026 
00:49:31 +0000 (0:00:00.882) 0:00:51.251 ********* 2026-03-17 00:59:11.539580 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 00:59:11.539599 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 00:59:11.539607 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 00:59:11.539615 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 00:59:11.539624 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 00:59:11.539630 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 00:59:11.539637 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 00:59:11.539644 | orchestrator | 2026-03-17 00:59:11.539650 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-17 00:59:11.539656 | orchestrator | Tuesday 17 March 2026 00:49:32 +0000 (0:00:00.938) 0:00:52.189 ********* 2026-03-17 00:59:11.539663 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 00:59:11.539667 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 00:59:11.539670 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 00:59:11.539675 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 00:59:11.539681 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 00:59:11.539690 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 00:59:11.539697 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
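The `ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]` lines in the two tasks above are delegated loop iterations: one host runs the task once per target and Ansible prints `host -> target(ip)` for each. A minimal sketch of that pattern follows; the task name, variable, and command string are illustrative assumptions, not the ceph-facts role's actual source.

```yaml
# Sketch of a delegated set_fact loop that produces
# "ok: [host -> target(ip)]" console lines like those above.
# The fact is set on each delegated target, not on the looping host.
- name: Set_fact ceph_run_cmd (illustrative sketch)
  ansible.builtin.set_fact:
    ceph_run_cmd: "ceph --cluster {{ cluster | default('ceph') }}"
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['all'] }}"
```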
2026-03-17 00:59:11.539703 | orchestrator | 2026-03-17 00:59:11.539709 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:59:11.539715 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:01.709) 0:00:53.899 ********* 2026-03-17 00:59:11.539721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.539728 | orchestrator | 2026-03-17 00:59:11.539733 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:59:11.539739 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.982) 0:00:54.881 ********* 2026-03-17 00:59:11.539744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.539750 | orchestrator | 2026-03-17 00:59:11.539755 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:59:11.539761 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.980) 0:00:55.862 ********* 2026-03-17 00:59:11.539767 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.539773 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.539784 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.539789 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.539795 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.539801 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.539807 | orchestrator | 2026-03-17 00:59:11.539813 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:59:11.539820 | orchestrator | Tuesday 17 March 2026 00:49:37 +0000 (0:00:01.429) 0:00:57.291 ********* 2026-03-17 
00:59:11.539826 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.539832 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.539838 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.539844 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.539853 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.539861 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.539867 | orchestrator | 2026-03-17 00:59:11.539873 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:59:11.539885 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:01.441) 0:00:58.733 ********* 2026-03-17 00:59:11.539891 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.539896 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.539901 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.539937 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.539944 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.539950 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.539955 | orchestrator | 2026-03-17 00:59:11.539960 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:59:11.539966 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:01.164) 0:00:59.898 ********* 2026-03-17 00:59:11.539972 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.539978 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.539984 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.539990 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.539996 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540002 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540008 | orchestrator | 2026-03-17 00:59:11.540014 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:59:11.540021 | orchestrator | 
Tuesday 17 March 2026 00:49:41 +0000 (0:00:02.042) 0:01:01.941 ********* 2026-03-17 00:59:11.540027 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540034 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540040 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540046 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540052 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540090 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540099 | orchestrator | 2026-03-17 00:59:11.540106 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:59:11.540112 | orchestrator | Tuesday 17 March 2026 00:49:43 +0000 (0:00:01.598) 0:01:03.540 ********* 2026-03-17 00:59:11.540119 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540125 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540130 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540139 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540146 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540152 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540158 | orchestrator | 2026-03-17 00:59:11.540164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:59:11.540170 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:01.948) 0:01:05.489 ********* 2026-03-17 00:59:11.540176 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540182 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540187 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540193 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540199 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540205 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540211 | orchestrator | 2026-03-17 00:59:11.540217 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:59:11.540223 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:01.479) 0:01:06.968 ********* 2026-03-17 00:59:11.540230 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540235 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540240 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540246 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540252 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540257 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540263 | orchestrator | 2026-03-17 00:59:11.540269 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:59:11.540275 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:02.259) 0:01:09.228 ********* 2026-03-17 00:59:11.540281 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540287 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540306 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540312 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540318 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540324 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540330 | orchestrator | 2026-03-17 00:59:11.540336 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:59:11.540342 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:01.116) 0:01:10.345 ********* 2026-03-17 00:59:11.540348 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540354 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540360 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540366 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540372 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540378 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540383 | 
orchestrator | 2026-03-17 00:59:11.540389 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:59:11.540396 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:01.356) 0:01:11.701 ********* 2026-03-17 00:59:11.540402 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540408 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540414 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540420 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540429 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540437 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540443 | orchestrator | 2026-03-17 00:59:11.540448 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:59:11.540454 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.687) 0:01:12.389 ********* 2026-03-17 00:59:11.540460 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540466 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540480 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540487 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540493 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540499 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540505 | orchestrator | 2026-03-17 00:59:11.540511 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:59:11.540520 | orchestrator | Tuesday 17 March 2026 00:49:53 +0000 (0:00:01.385) 0:01:13.774 ********* 2026-03-17 00:59:11.540527 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540534 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540540 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540545 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540551 | orchestrator | ok: [testbed-node-5] 2026-03-17 
00:59:11.540557 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540562 | orchestrator | 2026-03-17 00:59:11.540568 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:59:11.540575 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:01.012) 0:01:14.787 ********* 2026-03-17 00:59:11.540581 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540587 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540593 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540599 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540607 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540614 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540620 | orchestrator | 2026-03-17 00:59:11.540626 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:59:11.540632 | orchestrator | Tuesday 17 March 2026 00:49:56 +0000 (0:00:01.470) 0:01:16.257 ********* 2026-03-17 00:59:11.540638 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540644 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540650 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540656 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540663 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540669 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540681 | orchestrator | 2026-03-17 00:59:11.540687 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:59:11.540693 | orchestrator | Tuesday 17 March 2026 00:49:57 +0000 (0:00:00.996) 0:01:17.254 ********* 2026-03-17 00:59:11.540699 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540704 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540710 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540715 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.540756 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.540764 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.540769 | orchestrator | 2026-03-17 00:59:11.540775 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:59:11.540782 | orchestrator | Tuesday 17 March 2026 00:49:58 +0000 (0:00:01.440) 0:01:18.694 ********* 2026-03-17 00:59:11.540787 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.540792 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.540798 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.540804 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540809 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540816 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540822 | orchestrator | 2026-03-17 00:59:11.540828 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:59:11.540833 | orchestrator | Tuesday 17 March 2026 00:49:59 +0000 (0:00:01.069) 0:01:19.764 ********* 2026-03-17 00:59:11.540839 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540845 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540851 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540858 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540864 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540870 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540876 | orchestrator | 2026-03-17 00:59:11.540882 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:59:11.540888 | orchestrator | Tuesday 17 March 2026 00:50:01 +0000 (0:00:01.619) 0:01:21.384 ********* 2026-03-17 00:59:11.540895 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.540901 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.540925 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.540932 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.540937 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.540943 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.540949 | orchestrator | 2026-03-17 00:59:11.540955 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-17 00:59:11.540962 | orchestrator | Tuesday 17 March 2026 00:50:03 +0000 (0:00:01.676) 0:01:23.061 ********* 2026-03-17 00:59:11.540970 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.540979 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.540986 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.540992 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.540998 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.541004 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.541009 | orchestrator | 2026-03-17 00:59:11.541016 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-17 00:59:11.541022 | orchestrator | Tuesday 17 March 2026 00:50:04 +0000 (0:00:01.719) 0:01:24.780 ********* 2026-03-17 00:59:11.541028 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.541033 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.541040 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.541045 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.541051 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.541057 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.541064 | orchestrator | 2026-03-17 00:59:11.541073 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-17 00:59:11.541081 | orchestrator | Tuesday 17 March 2026 00:50:07 +0000 (0:00:03.215) 0:01:27.996 ********* 2026-03-17 00:59:11.541088 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.541102 | orchestrator | 2026-03-17 00:59:11.541108 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-17 00:59:11.541114 | orchestrator | Tuesday 17 March 2026 00:50:09 +0000 (0:00:01.731) 0:01:29.727 ********* 2026-03-17 00:59:11.541120 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541126 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541137 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541146 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541153 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541159 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541165 | orchestrator | 2026-03-17 00:59:11.541174 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-17 00:59:11.541182 | orchestrator | Tuesday 17 March 2026 00:50:10 +0000 (0:00:00.813) 0:01:30.541 ********* 2026-03-17 00:59:11.541188 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541194 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541201 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541207 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541216 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541224 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541229 | orchestrator | 2026-03-17 00:59:11.541236 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-17 00:59:11.541242 | orchestrator | Tuesday 17 March 2026 00:50:11 +0000 (0:00:01.065) 0:01:31.606 ********* 2026-03-17 00:59:11.541248 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 
00:59:11.541254 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:59:11.541261 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:59:11.541267 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:59:11.541273 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541280 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:59:11.541287 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541294 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541301 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:59:11.541308 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541350 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541359 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:59:11.541366 | orchestrator | 2026-03-17 00:59:11.541372 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-17 00:59:11.541379 | orchestrator | Tuesday 17 March 2026 00:50:13 +0000 (0:00:01.446) 0:01:33.053 ********* 2026-03-17 00:59:11.541384 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.541390 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.541396 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.541402 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.541408 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.541414 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.541420 | orchestrator | 2026-03-17 00:59:11.541427 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-17 00:59:11.541436 | orchestrator | Tuesday 17 March 2026 00:50:14 +0000 (0:00:01.270) 0:01:34.323 ********* 2026-03-17 00:59:11.541443 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541456 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541463 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541469 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541475 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541481 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541487 | orchestrator | 2026-03-17 00:59:11.541493 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-17 00:59:11.541499 | orchestrator | Tuesday 17 March 2026 00:50:15 +0000 (0:00:00.728) 0:01:35.051 ********* 2026-03-17 00:59:11.541505 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541512 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541518 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541524 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541530 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541536 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541541 | orchestrator | 2026-03-17 00:59:11.541548 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-17 00:59:11.541554 | orchestrator | Tuesday 17 March 2026 00:50:16 +0000 (0:00:01.244) 0:01:36.296 ********* 2026-03-17 00:59:11.541560 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541566 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541573 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541577 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541580 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541584 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541588 | orchestrator | 2026-03-17 00:59:11.541592 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-17 00:59:11.541596 | orchestrator | Tuesday 17 March 2026 00:50:16 +0000 (0:00:00.657) 0:01:36.954 ********* 2026-03-17 00:59:11.541600 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.541604 | orchestrator | 2026-03-17 00:59:11.541608 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-17 00:59:11.541612 | orchestrator | Tuesday 17 March 2026 00:50:18 +0000 (0:00:01.404) 0:01:38.358 ********* 2026-03-17 00:59:11.541615 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.541620 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.541623 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.541627 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.541632 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.541640 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.541648 | orchestrator | 2026-03-17 00:59:11.541659 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-17 00:59:11.541665 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:53.846) 0:02:32.205 ********* 2026-03-17 00:59:11.541672 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541678 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541684 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541689 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541695 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541701 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541706 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541713 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541718 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541724 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541730 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541744 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541750 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541756 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541763 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541768 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541774 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541780 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541785 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541792 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541831 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:59:11.541840 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:59:11.541847 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:59:11.541853 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541859 | orchestrator | 2026-03-17 00:59:11.541865 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-17 00:59:11.541871 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:00.549) 0:02:32.755 ********* 2026-03-17 00:59:11.541877 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541883 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541889 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541899 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541920 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.541928 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.541934 | orchestrator | 2026-03-17 00:59:11.541941 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-17 00:59:11.541946 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.696) 0:02:33.452 ********* 2026-03-17 00:59:11.541952 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541957 | orchestrator | 2026-03-17 00:59:11.541963 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-17 00:59:11.541969 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.121) 0:02:33.573 ********* 2026-03-17 00:59:11.541976 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.541981 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.541987 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.541993 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.541999 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.542005 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:59:11.542011 | orchestrator | 2026-03-17 00:59:11.542042 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-17 00:59:11.542049 | orchestrator | Tuesday 17 March 2026 00:51:14 +0000 (0:00:00.740) 0:02:34.313 ********* 2026-03-17 00:59:11.542055 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.542061 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.542067 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.542073 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.542079 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.542085 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.542091 | orchestrator | 2026-03-17 00:59:11.542097 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-17 00:59:11.542103 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:00.846) 0:02:35.160 ********* 2026-03-17 00:59:11.542110 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.542115 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.542121 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.542128 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.542142 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.542149 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.542155 | orchestrator | 2026-03-17 00:59:11.542161 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-17 00:59:11.542167 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:00.625) 0:02:35.786 ********* 2026-03-17 00:59:11.542173 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.542179 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.542185 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.542191 | orchestrator | ok: [testbed-node-1] 2026-03-17 
00:59:11.542197 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.542203 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.542209 | orchestrator | 2026-03-17 00:59:11.542215 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-17 00:59:11.542222 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:03.551) 0:02:39.337 ********* 2026-03-17 00:59:11.542232 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.542238 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.542244 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.542250 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.542256 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.542262 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.542268 | orchestrator | 2026-03-17 00:59:11.542274 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-17 00:59:11.542280 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.537) 0:02:39.875 ********* 2026-03-17 00:59:11.542287 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.542294 | orchestrator | 2026-03-17 00:59:11.542300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-17 00:59:11.542306 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:01.242) 0:02:41.117 ********* 2026-03-17 00:59:11.542312 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.542318 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.542324 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.542330 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.542336 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.542342 | orchestrator | skipping: 
[testbed-node-2]
2026-03-17 00:59:11.542348 | orchestrator |
2026-03-17 00:59:11.542354 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-17 00:59:11.542360 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:00.663) 0:02:41.781 *********
2026-03-17 00:59:11.542366 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542372 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542378 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542384 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542390 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542396 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542403 | orchestrator |
2026-03-17 00:59:11.542409 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-17 00:59:11.542415 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:00.836) 0:02:42.617 *********
2026-03-17 00:59:11.542421 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542427 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542460 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542467 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542473 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542479 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542485 | orchestrator |
2026-03-17 00:59:11.542491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-17 00:59:11.542497 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.649) 0:02:43.266 *********
2026-03-17 00:59:11.542503 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542514 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542520 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542526 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542532 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542541 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542548 | orchestrator |
2026-03-17 00:59:11.542556 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-17 00:59:11.542564 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.812) 0:02:44.079 *********
2026-03-17 00:59:11.542570 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542576 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542581 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542588 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542594 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542599 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542605 | orchestrator |
2026-03-17 00:59:11.542611 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-17 00:59:11.542617 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.708) 0:02:44.787 *********
2026-03-17 00:59:11.542623 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542629 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542636 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542641 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542646 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542651 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542657 | orchestrator |
2026-03-17 00:59:11.542662 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-17 00:59:11.542668 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:00.851) 0:02:45.639 *********
2026-03-17 00:59:11.542674 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542679 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542686 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542692 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542698 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542704 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542709 | orchestrator |
2026-03-17 00:59:11.542715 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-17 00:59:11.542720 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:00.520) 0:02:46.159 *********
2026-03-17 00:59:11.542725 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.542731 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.542737 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.542744 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.542750 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.542756 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.542762 | orchestrator |
2026-03-17 00:59:11.542769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-17 00:59:11.542775 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:00.737) 0:02:46.896 *********
2026-03-17 00:59:11.542782 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.542788 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.542794 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.542801 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.542807 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.542813 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.542819 | orchestrator |
2026-03-17 00:59:11.542826 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-17 00:59:11.542838 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:01.065) 0:02:47.962 *********
2026-03-17 00:59:11.542844 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.542852 | orchestrator |
2026-03-17 00:59:11.542858 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-17 00:59:11.542871 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:01.021) 0:02:48.984 *********
2026-03-17 00:59:11.542877 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-17 00:59:11.542885 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-17 00:59:11.542891 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-17 00:59:11.542897 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-17 00:59:11.542936 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.542945 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-17 00:59:11.542952 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-17 00:59:11.542958 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.542965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.542972 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.542978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.542983 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.542989 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.542995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-17 00:59:11.543001 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.543007 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.543014 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543021 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.543062 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-17 00:59:11.543070 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543076 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543090 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543096 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543104 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-17 00:59:11.543111 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543118 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543132 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543138 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543144 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-17 00:59:11.543151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543157 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543170 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-17 00:59:11.543184 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543190 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543195 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543202 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543214 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-17 00:59:11.543220 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543235 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543253 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543258 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543265 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:59:11.543271 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543277 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543283 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543289 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543296 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543302 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543308 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:59:11.543319 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543325 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543331 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543337 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543343 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543349 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543354 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:59:11.543361 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543373 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543380 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543386 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:59:11.543398 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543404 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543411 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543416 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:59:11.543423 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543429 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543435 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543444 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543479 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543486 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:59:11.543493 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543499 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543505 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543511 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543523 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543530 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:59:11.543536 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-17 00:59:11.543543 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-17 00:59:11.543549 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-17 00:59:11.543555 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-17 00:59:11.543561 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-17 00:59:11.543568 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-17 00:59:11.543574 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-17 00:59:11.543580 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-17 00:59:11.543586 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-17 00:59:11.543592 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-17 00:59:11.543598 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-17 00:59:11.543604 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-17 00:59:11.543610 | orchestrator |
2026-03-17 00:59:11.543616 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-17 00:59:11.543622 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:05.629) 0:02:54.613 *********
2026-03-17 00:59:11.543632 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.543638 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.543641 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.543646 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.543650 | orchestrator |
2026-03-17 00:59:11.543654 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-17 00:59:11.543657 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:00.936) 0:02:55.550 *********
2026-03-17 00:59:11.543661 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543665 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543669 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543673 | orchestrator |
2026-03-17 00:59:11.543676 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-17 00:59:11.543680 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:01.000) 0:02:56.550 *********
2026-03-17 00:59:11.543691 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543695 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543698 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.543702 | orchestrator |
2026-03-17 00:59:11.543706 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-17 00:59:11.543710 | orchestrator | Tuesday 17 March 2026 00:51:38 +0000 (0:00:01.559) 0:02:58.110 *********
2026-03-17 00:59:11.543713 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.543718 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.543724 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.543730 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.543739 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.543747 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.543757 | orchestrator |
2026-03-17 00:59:11.543763 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-17 00:59:11.543769 | orchestrator | Tuesday 17 March 2026 00:51:38 +0000 (0:00:00.541) 0:02:58.651 *********
2026-03-17 00:59:11.543775 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.543780 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.543785 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.543792 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.543797 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.543803 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.543811 | orchestrator |
2026-03-17 00:59:11.543820 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-17 00:59:11.543826 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:00.463) 0:02:59.115 *********
2026-03-17 00:59:11.543832 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.543839 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.543845 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.543852 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.543858 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.543865 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.543871 | orchestrator |
2026-03-17 00:59:11.543919 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-17 00:59:11.543932 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:00.768) 0:02:59.884 *********
2026-03-17 00:59:11.543939 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.543945 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.543952 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.543959 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.543965 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.543971 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.543977 | orchestrator |
2026-03-17 00:59:11.543983 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-17 00:59:11.543990 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:00.545) 0:03:00.429 *********
2026-03-17 00:59:11.543996 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544002 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544009 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544017 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544024 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544034 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544041 | orchestrator |
2026-03-17 00:59:11.544048 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-17 00:59:11.544054 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.646) 0:03:01.076 *********
2026-03-17 00:59:11.544060 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544067 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544073 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544079 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544086 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544093 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544102 | orchestrator |
2026-03-17 00:59:11.544111 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-17 00:59:11.544117 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.574) 0:03:01.650 *********
2026-03-17 00:59:11.544124 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544131 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544139 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544148 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544155 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544161 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544168 | orchestrator |
2026-03-17 00:59:11.544175 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-17 00:59:11.544188 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.654) 0:03:02.305 *********
2026-03-17 00:59:11.544198 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544205 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544212 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544219 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544225 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544232 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544239 | orchestrator |
2026-03-17 00:59:11.544246 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-17 00:59:11.544253 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.485) 0:03:02.790 *********
2026-03-17 00:59:11.544260 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544266 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544271 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544275 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.544279 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.544284 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.544291 | orchestrator |
2026-03-17 00:59:11.544297 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-17 00:59:11.544303 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:03.257) 0:03:06.048 *********
2026-03-17 00:59:11.544309 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.544315 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.544322 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.544331 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544338 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544344 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544350 | orchestrator |
2026-03-17 00:59:11.544356 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-17 00:59:11.544363 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:00.569) 0:03:06.617 *********
2026-03-17 00:59:11.544369 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.544375 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.544381 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.544387 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544395 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544404 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544410 | orchestrator |
2026-03-17 00:59:11.544416 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-17 00:59:11.544423 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.768) 0:03:07.386 *********
2026-03-17 00:59:11.544428 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544435 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544441 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544448 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544454 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544460 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544467 | orchestrator |
2026-03-17 00:59:11.544473 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-17 00:59:11.544481 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.592) 0:03:07.978 *********
2026-03-17 00:59:11.544490 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.544498 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.544504 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:59:11.544510 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544539 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544545 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544551 | orchestrator |
2026-03-17 00:59:11.544557 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-17 00:59:11.544571 | orchestrator | Tuesday 17 March 2026 00:51:48 +0000 (0:00:00.792) 0:03:08.771 *********
2026-03-17 00:59:11.544582 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-17 00:59:11.544591 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-17 00:59:11.544598 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-17 00:59:11.544605 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-17 00:59:11.544611 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544618 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-17 00:59:11.544624 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-17 00:59:11.544631 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544637 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544643 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544649 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544658 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544666 | orchestrator |
2026-03-17 00:59:11.544729 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-17 00:59:11.544757 | orchestrator | Tuesday 17 March 2026 00:51:49 +0000 (0:00:00.650) 0:03:09.422 *********
2026-03-17 00:59:11.544764 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544769 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544775 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544780 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544785 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544790 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544796 | orchestrator |
2026-03-17 00:59:11.544801 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-17 00:59:11.544806 | orchestrator | Tuesday 17 March 2026 00:51:50 +0000 (0:00:00.883) 0:03:10.306 *********
2026-03-17 00:59:11.544811 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544816 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544821 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544827 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544833 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544839 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544845 | orchestrator |
2026-03-17 00:59:11.544851 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-17 00:59:11.544862 | orchestrator | Tuesday 17 March 2026 00:51:50 +0000 (0:00:00.628) 0:03:10.934 *********
2026-03-17 00:59:11.544868 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544873 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544878 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544884 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544890 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544900 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544919 | orchestrator |
2026-03-17 00:59:11.544925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-17 00:59:11.544931 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:00.948) 0:03:11.883 *********
2026-03-17 00:59:11.544937 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.544943 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.544948 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.544954 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.544959 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.544964 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.544970 | orchestrator |
2026-03-17 00:59:11.544975 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-17 00:59:11.545016 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:00.697) 0:03:12.581 *********
2026-03-17 00:59:11.545024 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.545029 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.545035 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.545041 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.545047 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.545053 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.545059 | orchestrator |
2026-03-17 00:59:11.545066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-17 00:59:11.545072 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.747) 0:03:13.329 *********
2026-03-17 00:59:11.545079 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.545085 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.545091 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.545096 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.545100 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.545104 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.545108 | orchestrator |
2026-03-17 00:59:11.545112 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-17 00:59:11.545116 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.726) 0:03:14.055 *********
2026-03-17 00:59:11.545123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:59:11.545128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:59:11.545135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:59:11.545140 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.545144 | orchestrator |
2026-03-17 00:59:11.545147 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-17 00:59:11.545151 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.383) 0:03:14.439 *********
2026-03-17 00:59:11.545155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:59:11.545159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:59:11.545163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:59:11.545167 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.545170 | orchestrator |
2026-03-17 00:59:11.545174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-17 00:59:11.545178 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:00.621) 0:03:15.060 *********
2026-03-17 00:59:11.545182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:59:11.545185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:59:11.545195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:59:11.545198 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.545202 | orchestrator |
2026-03-17 00:59:11.545206 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-17 00:59:11.545210 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:00.560) 0:03:15.621 *********
2026-03-17 00:59:11.545213 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.545217 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.545221 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.545224 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.545228 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.545232 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.545235 | orchestrator |
2026-03-17 00:59:11.545239 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-17 00:59:11.545243 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.829) 0:03:16.450 *********
2026-03-17 00:59:11.545246 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-17 00:59:11.545250 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-17 00:59:11.545254 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.545261 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-17 00:59:11.545265 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-17 00:59:11.545268 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.545272 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-17 00:59:11.545276 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.545279 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-17 00:59:11.545283 | orchestrator |
2026-03-17 00:59:11.545287 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-17 00:59:11.545290 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:02.041) 0:03:18.492 *********
2026-03-17 00:59:11.545294 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.545298 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.545302 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.545305 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.545309 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.545313 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.545316 | orchestrator |
2026-03-17 00:59:11.545320 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:59:11.545324 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:02.591) 0:03:21.083 *********
2026-03-17 00:59:11.545328 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.545331 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.545335 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.545339 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.545342 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.545346 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.545350 | orchestrator |
2026-03-17 00:59:11.545353 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-17 00:59:11.545357 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:01.218) 0:03:22.301 *********
2026-03-17 00:59:11.545361 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.545364 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.545368 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.545372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.545376 | orchestrator |
2026-03-17 00:59:11.545380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-17 00:59:11.545400 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:00.894) 0:03:23.196 *********
2026-03-17 00:59:11.545404 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.545408 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.545412 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.545415 | orchestrator |
2026-03-17 00:59:11.545422 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-17 00:59:11.545426 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:00.361) 0:03:23.558 *********
2026-03-17 00:59:11.545430 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.545433 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.545437 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.545441 | orchestrator |
2026-03-17 00:59:11.545444 | orchestrator |
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-17 00:59:11.545448 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:01.090) 0:03:24.648 ********* 2026-03-17 00:59:11.545454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:59:11.545460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:59:11.545467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:59:11.545473 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.545479 | orchestrator | 2026-03-17 00:59:11.545486 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-17 00:59:11.545492 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.620) 0:03:25.269 ********* 2026-03-17 00:59:11.545499 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.545506 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.545512 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.545519 | orchestrator | 2026-03-17 00:59:11.545526 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-17 00:59:11.545533 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.261) 0:03:25.530 ********* 2026-03-17 00:59:11.545539 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.545546 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.545552 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.545556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.545559 | orchestrator | 2026-03-17 00:59:11.545563 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-17 00:59:11.545567 | orchestrator | Tuesday 17 March 2026 00:52:06 +0000 (0:00:00.852) 0:03:26.383 ********* 2026-03-17 
00:59:11.545571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.545574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.545578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.545582 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545586 | orchestrator | 2026-03-17 00:59:11.545589 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-17 00:59:11.545593 | orchestrator | Tuesday 17 March 2026 00:52:06 +0000 (0:00:00.417) 0:03:26.800 ********* 2026-03-17 00:59:11.545597 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545601 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.545604 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.545608 | orchestrator | 2026-03-17 00:59:11.545612 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-17 00:59:11.545616 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.292) 0:03:27.092 ********* 2026-03-17 00:59:11.545619 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545623 | orchestrator | 2026-03-17 00:59:11.545627 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-17 00:59:11.545631 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.557) 0:03:27.649 ********* 2026-03-17 00:59:11.545634 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545641 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.545645 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.545648 | orchestrator | 2026-03-17 00:59:11.545652 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-17 00:59:11.545656 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.336) 0:03:27.986 ********* 
2026-03-17 00:59:11.545663 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545667 | orchestrator | 2026-03-17 00:59:11.545672 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-17 00:59:11.545676 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.246) 0:03:28.233 ********* 2026-03-17 00:59:11.545680 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545684 | orchestrator | 2026-03-17 00:59:11.545689 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-17 00:59:11.545693 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.209) 0:03:28.442 ********* 2026-03-17 00:59:11.545697 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545703 | orchestrator | 2026-03-17 00:59:11.545709 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-17 00:59:11.545715 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.113) 0:03:28.556 ********* 2026-03-17 00:59:11.545720 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545725 | orchestrator | 2026-03-17 00:59:11.545731 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-17 00:59:11.545736 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.210) 0:03:28.766 ********* 2026-03-17 00:59:11.545742 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545747 | orchestrator | 2026-03-17 00:59:11.545752 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-17 00:59:11.545758 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.196) 0:03:28.963 ********* 2026-03-17 00:59:11.545763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.545769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 
00:59:11.545774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.545779 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545786 | orchestrator | 2026-03-17 00:59:11.545794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-17 00:59:11.545823 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:00.316) 0:03:29.280 ********* 2026-03-17 00:59:11.545830 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545836 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.545842 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.545848 | orchestrator | 2026-03-17 00:59:11.545854 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-17 00:59:11.545859 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:00.605) 0:03:29.886 ********* 2026-03-17 00:59:11.545866 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545871 | orchestrator | 2026-03-17 00:59:11.545877 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-17 00:59:11.545883 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:00.201) 0:03:30.087 ********* 2026-03-17 00:59:11.545889 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.545896 | orchestrator | 2026-03-17 00:59:11.545902 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-17 00:59:11.545920 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:00.185) 0:03:30.273 ********* 2026-03-17 00:59:11.545927 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.545933 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.545940 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.545946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-17 00:59:11.545952 | orchestrator | 2026-03-17 00:59:11.545958 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-17 00:59:11.545964 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:00.704) 0:03:30.978 ********* 2026-03-17 00:59:11.545971 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.545978 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.545984 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.545996 | orchestrator | 2026-03-17 00:59:11.546002 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-17 00:59:11.546009 | orchestrator | Tuesday 17 March 2026 00:52:11 +0000 (0:00:00.529) 0:03:31.507 ********* 2026-03-17 00:59:11.546046 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.546053 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.546058 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.546064 | orchestrator | 2026-03-17 00:59:11.546070 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-17 00:59:11.546076 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:01.562) 0:03:33.070 ********* 2026-03-17 00:59:11.546083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.546089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.546095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.546102 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.546108 | orchestrator | 2026-03-17 00:59:11.546114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-17 00:59:11.546120 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:00.730) 0:03:33.800 ********* 2026-03-17 00:59:11.546123 | orchestrator | ok: 
[testbed-node-3] 2026-03-17 00:59:11.546127 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.546131 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.546135 | orchestrator | 2026-03-17 00:59:11.546141 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 00:59:11.546145 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:00.332) 0:03:34.132 ********* 2026-03-17 00:59:11.546149 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546153 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546156 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.546168 | orchestrator | 2026-03-17 00:59:11.546172 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 00:59:11.546175 | orchestrator | Tuesday 17 March 2026 00:52:15 +0000 (0:00:00.918) 0:03:35.051 ********* 2026-03-17 00:59:11.546179 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.546183 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.546186 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.546190 | orchestrator | 2026-03-17 00:59:11.546194 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 00:59:11.546198 | orchestrator | Tuesday 17 March 2026 00:52:15 +0000 (0:00:00.372) 0:03:35.423 ********* 2026-03-17 00:59:11.546201 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.546205 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.546209 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.546212 | orchestrator | 2026-03-17 00:59:11.546216 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 00:59:11.546220 | orchestrator | Tuesday 
17 March 2026 00:52:16 +0000 (0:00:01.543) 0:03:36.966 ********* 2026-03-17 00:59:11.546224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.546227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.546231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.546235 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.546238 | orchestrator | 2026-03-17 00:59:11.546242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 00:59:11.546246 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:00.463) 0:03:37.429 ********* 2026-03-17 00:59:11.546252 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.546258 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.546264 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.546271 | orchestrator | 2026-03-17 00:59:11.546277 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-17 00:59:11.546288 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:00.229) 0:03:37.658 ********* 2026-03-17 00:59:11.546294 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.546301 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.546307 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.546314 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546320 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546355 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546364 | orchestrator | 2026-03-17 00:59:11.546370 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-17 00:59:11.546376 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:00.584) 0:03:38.243 ********* 2026-03-17 00:59:11.546382 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
00:59:11.546388 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.546394 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.546400 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.546405 | orchestrator | 2026-03-17 00:59:11.546411 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-17 00:59:11.546417 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.014) 0:03:39.257 ********* 2026-03-17 00:59:11.546422 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546428 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.546434 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.546439 | orchestrator | 2026-03-17 00:59:11.546445 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-17 00:59:11.546451 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:00.231) 0:03:39.489 ********* 2026-03-17 00:59:11.546457 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.546463 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.546469 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.546474 | orchestrator | 2026-03-17 00:59:11.546480 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-17 00:59:11.546486 | orchestrator | Tuesday 17 March 2026 00:52:21 +0000 (0:00:01.591) 0:03:41.081 ********* 2026-03-17 00:59:11.546492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:59:11.546498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:59:11.546503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:59:11.546509 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546515 | orchestrator | 2026-03-17 00:59:11.546521 | orchestrator | 
RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-17 00:59:11.546526 | orchestrator | Tuesday 17 March 2026 00:52:21 +0000 (0:00:00.664) 0:03:41.745 ********* 2026-03-17 00:59:11.546532 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546538 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.546543 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.546549 | orchestrator | 2026-03-17 00:59:11.546555 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-17 00:59:11.546562 | orchestrator | 2026-03-17 00:59:11.546569 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:59:11.546575 | orchestrator | Tuesday 17 March 2026 00:52:22 +0000 (0:00:00.555) 0:03:42.301 ********* 2026-03-17 00:59:11.546581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.546588 | orchestrator | 2026-03-17 00:59:11.546592 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:59:11.546596 | orchestrator | Tuesday 17 March 2026 00:52:22 +0000 (0:00:00.588) 0:03:42.890 ********* 2026-03-17 00:59:11.546600 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.546604 | orchestrator | 2026-03-17 00:59:11.546611 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:59:11.546624 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:00.495) 0:03:43.386 ********* 2026-03-17 00:59:11.546632 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546638 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.546647 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.546653 | orchestrator | 
2026-03-17 00:59:11.546659 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:59:11.546665 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.945) 0:03:44.331 ********* 2026-03-17 00:59:11.546671 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546677 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546684 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546690 | orchestrator | 2026-03-17 00:59:11.546697 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:59:11.546703 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.259) 0:03:44.590 ********* 2026-03-17 00:59:11.546709 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546715 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546722 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546728 | orchestrator | 2026-03-17 00:59:11.546734 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:59:11.546741 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:00.459) 0:03:45.050 ********* 2026-03-17 00:59:11.546747 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546753 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546760 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546765 | orchestrator | 2026-03-17 00:59:11.546771 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:59:11.546777 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:00.594) 0:03:45.645 ********* 2026-03-17 00:59:11.546783 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.546790 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546796 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.546802 | orchestrator | 2026-03-17 
00:59:11.546808 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:59:11.546815 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:00.857) 0:03:46.502 ********* 2026-03-17 00:59:11.546821 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546827 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546833 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546839 | orchestrator | 2026-03-17 00:59:11.546845 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:59:11.546852 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:00.288) 0:03:46.791 ********* 2026-03-17 00:59:11.546883 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.546890 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.546897 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.546903 | orchestrator | 2026-03-17 00:59:11.546940 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:59:11.546946 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.434) 0:03:47.226 ********* 2026-03-17 00:59:11.546952 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546959 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.546965 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.546971 | orchestrator | 2026-03-17 00:59:11.546977 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:59:11.546983 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.676) 0:03:47.902 ********* 2026-03-17 00:59:11.546989 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.546996 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547001 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547008 | orchestrator | 2026-03-17 00:59:11.547014 | orchestrator | TASK 
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:59:11.547021 | orchestrator | Tuesday 17 March 2026 00:52:28 +0000 (0:00:00.648) 0:03:48.551 ********* 2026-03-17 00:59:11.547034 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547040 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547046 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547052 | orchestrator | 2026-03-17 00:59:11.547058 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:59:11.547065 | orchestrator | Tuesday 17 March 2026 00:52:28 +0000 (0:00:00.262) 0:03:48.813 ********* 2026-03-17 00:59:11.547071 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547077 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547083 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547089 | orchestrator | 2026-03-17 00:59:11.547096 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:59:11.547102 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.437) 0:03:49.250 ********* 2026-03-17 00:59:11.547108 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547114 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547121 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547127 | orchestrator | 2026-03-17 00:59:11.547133 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:59:11.547138 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.280) 0:03:49.530 ********* 2026-03-17 00:59:11.547145 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547151 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547157 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547162 | orchestrator | 2026-03-17 00:59:11.547169 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-03-17 00:59:11.547175 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.307) 0:03:49.838 ********* 2026-03-17 00:59:11.547181 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547187 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547193 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547199 | orchestrator | 2026-03-17 00:59:11.547205 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:59:11.547211 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:00.304) 0:03:50.142 ********* 2026-03-17 00:59:11.547217 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547223 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547229 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547235 | orchestrator | 2026-03-17 00:59:11.547241 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:59:11.547247 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:00.388) 0:03:50.530 ********* 2026-03-17 00:59:11.547253 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547262 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.547277 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.547283 | orchestrator | 2026-03-17 00:59:11.547289 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:59:11.547296 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:00.233) 0:03:50.764 ********* 2026-03-17 00:59:11.547302 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547308 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547314 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547320 | orchestrator | 2026-03-17 00:59:11.547326 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-03-17 00:59:11.547335 | orchestrator | Tuesday 17 March 2026 00:52:31 +0000 (0:00:00.301) 0:03:51.066 ********* 2026-03-17 00:59:11.547343 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547348 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547354 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547360 | orchestrator | 2026-03-17 00:59:11.547366 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:59:11.547373 | orchestrator | Tuesday 17 March 2026 00:52:31 +0000 (0:00:00.294) 0:03:51.361 ********* 2026-03-17 00:59:11.547379 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547391 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547397 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547403 | orchestrator | 2026-03-17 00:59:11.547408 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-17 00:59:11.547413 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:00.792) 0:03:52.153 ********* 2026-03-17 00:59:11.547418 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547425 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547449 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547456 | orchestrator | 2026-03-17 00:59:11.547462 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-17 00:59:11.547468 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:00.311) 0:03:52.465 ********* 2026-03-17 00:59:11.547478 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.547485 | orchestrator | 2026-03-17 00:59:11.547491 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-17 00:59:11.547497 | orchestrator | Tuesday 17 March 
2026 00:52:32 +0000 (0:00:00.517) 0:03:52.982 ********* 2026-03-17 00:59:11.547503 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.547510 | orchestrator | 2026-03-17 00:59:11.547551 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-17 00:59:11.547560 | orchestrator | Tuesday 17 March 2026 00:52:33 +0000 (0:00:00.388) 0:03:53.370 ********* 2026-03-17 00:59:11.547566 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 00:59:11.547572 | orchestrator | 2026-03-17 00:59:11.547579 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-17 00:59:11.547585 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:00.944) 0:03:54.315 ********* 2026-03-17 00:59:11.547594 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547602 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547607 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547614 | orchestrator | 2026-03-17 00:59:11.547621 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-17 00:59:11.547627 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:00.311) 0:03:54.627 ********* 2026-03-17 00:59:11.547634 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547639 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547645 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547651 | orchestrator | 2026-03-17 00:59:11.547658 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-17 00:59:11.547664 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:00.377) 0:03:55.004 ********* 2026-03-17 00:59:11.547670 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.547677 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.547683 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.547689 | orchestrator | 
2026-03-17 00:59:11.547695 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-17 00:59:11.547702 | orchestrator | Tuesday 17 March 2026 00:52:36 +0000 (0:00:01.702) 0:03:56.707 ********* 2026-03-17 00:59:11.547708 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.547714 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.547720 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.547728 | orchestrator | 2026-03-17 00:59:11.547738 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-17 00:59:11.547744 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:00.919) 0:03:57.626 ********* 2026-03-17 00:59:11.547750 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.547756 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.547762 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.547768 | orchestrator | 2026-03-17 00:59:11.547774 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-17 00:59:11.547780 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.658) 0:03:58.284 ********* 2026-03-17 00:59:11.547793 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547798 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.547804 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.547810 | orchestrator | 2026-03-17 00:59:11.547816 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-17 00:59:11.547822 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.599) 0:03:58.884 ********* 2026-03-17 00:59:11.547827 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.547833 | orchestrator | 2026-03-17 00:59:11.547839 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-17 00:59:11.547846 | orchestrator | 
Tuesday 17 March 2026 00:52:40 +0000 (0:00:01.862) 0:04:00.747 ********* 2026-03-17 00:59:11.547852 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.547857 | orchestrator | 2026-03-17 00:59:11.547862 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-17 00:59:11.547868 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:00.717) 0:04:01.464 ********* 2026-03-17 00:59:11.547874 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 00:59:11.547879 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.547889 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.547895 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 00:59:11.547902 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-17 00:59:11.547924 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 00:59:11.547929 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 00:59:11.547935 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-17 00:59:11.547941 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 00:59:11.547947 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-17 00:59:11.547953 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-17 00:59:11.547959 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-17 00:59:11.547965 | orchestrator | 2026-03-17 00:59:11.547971 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-17 00:59:11.547978 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:03.289) 0:04:04.754 ********* 2026-03-17 00:59:11.547984 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.547990 | orchestrator | 
changed: [testbed-node-0] 2026-03-17 00:59:11.547997 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548004 | orchestrator | 2026-03-17 00:59:11.548013 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-17 00:59:11.548022 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:01.678) 0:04:06.433 ********* 2026-03-17 00:59:11.548029 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.548035 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.548041 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.548047 | orchestrator | 2026-03-17 00:59:11.548058 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-17 00:59:11.548073 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:00.391) 0:04:06.824 ********* 2026-03-17 00:59:11.548081 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.548086 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.548092 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.548098 | orchestrator | 2026-03-17 00:59:11.548103 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-17 00:59:11.548109 | orchestrator | Tuesday 17 March 2026 00:52:47 +0000 (0:00:00.368) 0:04:07.193 ********* 2026-03-17 00:59:11.548116 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548157 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.548167 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548173 | orchestrator | 2026-03-17 00:59:11.548179 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-17 00:59:11.548193 | orchestrator | Tuesday 17 March 2026 00:52:49 +0000 (0:00:01.997) 0:04:09.190 ********* 2026-03-17 00:59:11.548199 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548206 | orchestrator | changed: [testbed-node-1] 2026-03-17 
00:59:11.548212 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548218 | orchestrator | 2026-03-17 00:59:11.548225 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-17 00:59:11.548231 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:01.882) 0:04:11.073 ********* 2026-03-17 00:59:11.548238 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548244 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548251 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.548257 | orchestrator | 2026-03-17 00:59:11.548264 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-17 00:59:11.548271 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:00.296) 0:04:11.369 ********* 2026-03-17 00:59:11.548277 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.548284 | orchestrator | 2026-03-17 00:59:11.548290 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-17 00:59:11.548296 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:00.659) 0:04:12.028 ********* 2026-03-17 00:59:11.548303 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548309 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548315 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.548321 | orchestrator | 2026-03-17 00:59:11.548328 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-17 00:59:11.548334 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:00.632) 0:04:12.661 ********* 2026-03-17 00:59:11.548340 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548346 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548353 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 00:59:11.548359 | orchestrator | 2026-03-17 00:59:11.548365 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-17 00:59:11.548371 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:00.405) 0:04:13.066 ********* 2026-03-17 00:59:11.548378 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.548384 | orchestrator | 2026-03-17 00:59:11.548390 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-17 00:59:11.548396 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:00.580) 0:04:13.646 ********* 2026-03-17 00:59:11.548403 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548409 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.548415 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548422 | orchestrator | 2026-03-17 00:59:11.548428 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-17 00:59:11.548434 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:01.759) 0:04:15.406 ********* 2026-03-17 00:59:11.548440 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548447 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.548453 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548459 | orchestrator | 2026-03-17 00:59:11.548467 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-17 00:59:11.548473 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:01.265) 0:04:16.671 ********* 2026-03-17 00:59:11.548480 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548490 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.548497 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548503 | orchestrator | 2026-03-17 00:59:11.548509 | 
orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-17 00:59:11.548516 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:01.820) 0:04:18.492 ********* 2026-03-17 00:59:11.548522 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.548532 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.548538 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.548544 | orchestrator | 2026-03-17 00:59:11.548551 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-17 00:59:11.548557 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:01.818) 0:04:20.310 ********* 2026-03-17 00:59:11.548563 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.548570 | orchestrator | 2026-03-17 00:59:11.548576 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-17 00:59:11.548582 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:00.680) 0:04:20.991 ********* 2026-03-17 00:59:11.548588 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.548595 | orchestrator | 2026-03-17 00:59:11.548601 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-17 00:59:11.548607 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:01.101) 0:04:22.093 ********* 2026-03-17 00:59:11.548613 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.548620 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.548626 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.548632 | orchestrator | 2026-03-17 00:59:11.548638 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-17 00:59:11.548676 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:08.618) 0:04:30.712 ********* 2026-03-17 00:59:11.548683 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548689 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548696 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.548702 | orchestrator | 2026-03-17 00:59:11.548708 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-17 00:59:11.548714 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:00.270) 0:04:30.983 ********* 2026-03-17 00:59:11.548745 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-17 00:59:11.548753 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-17 00:59:11.548760 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-17 00:59:11.548768 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-17 00:59:11.548775 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-17 00:59:11.548782 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__a449a48f09860ddc8bcbd8dc1d44d82f3b9cc0c2'}])  2026-03-17 00:59:11.548795 | orchestrator | 2026-03-17 00:59:11.548802 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:59:11.548809 | orchestrator | Tuesday 17 March 2026 00:53:23 +0000 (0:00:13.004) 0:04:43.987 ********* 2026-03-17 00:59:11.548815 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548822 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548831 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.548838 | orchestrator | 2026-03-17 00:59:11.548843 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-17 00:59:11.548850 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:00.403) 0:04:44.390 ********* 2026-03-17 00:59:11.548856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.548862 | orchestrator | 2026-03-17 00:59:11.548869 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-17 00:59:11.548875 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:00.614) 0:04:45.005 ********* 2026-03-17 00:59:11.548881 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.548887 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.548894 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.548900 | orchestrator | 2026-03-17 00:59:11.548922 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-17 00:59:11.548929 | orchestrator | Tuesday 17 March 2026 00:53:25 +0000 (0:00:00.622) 0:04:45.628 ********* 2026-03-17 00:59:11.548935 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548941 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.548947 | orchestrator | skipping: [testbed-node-2] 2026-03-17 
00:59:11.548954 | orchestrator | 2026-03-17 00:59:11.548960 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-17 00:59:11.548966 | orchestrator | Tuesday 17 March 2026 00:53:25 +0000 (0:00:00.325) 0:04:45.953 ********* 2026-03-17 00:59:11.548972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:59:11.548980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:59:11.548986 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:59:11.548992 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.548998 | orchestrator | 2026-03-17 00:59:11.549005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-17 00:59:11.549011 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:00.612) 0:04:46.566 ********* 2026-03-17 00:59:11.549017 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549023 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549029 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549034 | orchestrator | 2026-03-17 00:59:11.549038 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-17 00:59:11.549042 | orchestrator | 2026-03-17 00:59:11.549068 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:59:11.549076 | orchestrator | Tuesday 17 March 2026 00:53:27 +0000 (0:00:01.018) 0:04:47.584 ********* 2026-03-17 00:59:11.549083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.549089 | orchestrator | 2026-03-17 00:59:11.549094 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:59:11.549101 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 
(0:00:00.531) 0:04:48.116 ********* 2026-03-17 00:59:11.549106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.549117 | orchestrator | 2026-03-17 00:59:11.549148 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:59:11.549155 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 (0:00:00.508) 0:04:48.624 ********* 2026-03-17 00:59:11.549161 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549167 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549173 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549179 | orchestrator | 2026-03-17 00:59:11.549185 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:59:11.549194 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:00.979) 0:04:49.604 ********* 2026-03-17 00:59:11.549200 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549206 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549212 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549218 | orchestrator | 2026-03-17 00:59:11.549224 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:59:11.549230 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:00.341) 0:04:49.946 ********* 2026-03-17 00:59:11.549236 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549243 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549261 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549265 | orchestrator | 2026-03-17 00:59:11.549269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:59:11.549273 | orchestrator | Tuesday 17 March 2026 00:53:30 +0000 (0:00:00.323) 0:04:50.269 ********* 2026-03-17 00:59:11.549276 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549280 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549284 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549288 | orchestrator | 2026-03-17 00:59:11.549291 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:59:11.549295 | orchestrator | Tuesday 17 March 2026 00:53:30 +0000 (0:00:00.319) 0:04:50.589 ********* 2026-03-17 00:59:11.549299 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549303 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549306 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549310 | orchestrator | 2026-03-17 00:59:11.549314 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:59:11.549318 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:01.121) 0:04:51.711 ********* 2026-03-17 00:59:11.549321 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549325 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549329 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549332 | orchestrator | 2026-03-17 00:59:11.549336 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:59:11.549340 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:00.305) 0:04:52.016 ********* 2026-03-17 00:59:11.549344 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549347 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549351 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549355 | orchestrator | 2026-03-17 00:59:11.549362 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:59:11.549366 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:00.254) 0:04:52.271 ********* 2026-03-17 00:59:11.549370 | orchestrator | ok: 
[testbed-node-0] 2026-03-17 00:59:11.549374 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549378 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549381 | orchestrator | 2026-03-17 00:59:11.549385 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:59:11.549389 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:00.752) 0:04:53.023 ********* 2026-03-17 00:59:11.549393 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549396 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549400 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549404 | orchestrator | 2026-03-17 00:59:11.549408 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:59:11.549415 | orchestrator | Tuesday 17 March 2026 00:53:33 +0000 (0:00:00.892) 0:04:53.915 ********* 2026-03-17 00:59:11.549419 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549423 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549426 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549430 | orchestrator | 2026-03-17 00:59:11.549434 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:59:11.549438 | orchestrator | Tuesday 17 March 2026 00:53:34 +0000 (0:00:00.276) 0:04:54.192 ********* 2026-03-17 00:59:11.549441 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549445 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549449 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549453 | orchestrator | 2026-03-17 00:59:11.549456 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:59:11.549460 | orchestrator | Tuesday 17 March 2026 00:53:34 +0000 (0:00:00.280) 0:04:54.473 ********* 2026-03-17 00:59:11.549464 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549468 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549471 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549475 | orchestrator | 2026-03-17 00:59:11.549479 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:59:11.549482 | orchestrator | Tuesday 17 March 2026 00:53:34 +0000 (0:00:00.270) 0:04:54.743 ********* 2026-03-17 00:59:11.549486 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549490 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549512 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549517 | orchestrator | 2026-03-17 00:59:11.549521 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:59:11.549524 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:00.429) 0:04:55.173 ********* 2026-03-17 00:59:11.549528 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549532 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549535 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549539 | orchestrator | 2026-03-17 00:59:11.549543 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:59:11.549547 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:00.336) 0:04:55.510 ********* 2026-03-17 00:59:11.549550 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549554 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549558 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549562 | orchestrator | 2026-03-17 00:59:11.549565 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:59:11.549569 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:00.357) 0:04:55.867 ********* 2026-03-17 00:59:11.549573 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549576 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549580 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549584 | orchestrator | 2026-03-17 00:59:11.549588 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:59:11.549591 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:00.321) 0:04:56.189 ********* 2026-03-17 00:59:11.549595 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549599 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549603 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549606 | orchestrator | 2026-03-17 00:59:11.549610 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:59:11.549614 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:00.356) 0:04:56.545 ********* 2026-03-17 00:59:11.549618 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549621 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549625 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549629 | orchestrator | 2026-03-17 00:59:11.549632 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:59:11.549636 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:00.648) 0:04:57.194 ********* 2026-03-17 00:59:11.549643 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.549646 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.549650 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.549654 | orchestrator | 2026-03-17 00:59:11.549658 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-17 00:59:11.549661 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:00.619) 0:04:57.813 ********* 2026-03-17 00:59:11.549665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 00:59:11.549669 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 00:59:11.549673 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 00:59:11.549676 | orchestrator | 2026-03-17 00:59:11.549680 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-17 00:59:11.549684 | orchestrator | Tuesday 17 March 2026 00:53:38 +0000 (0:00:00.941) 0:04:58.754 ********* 2026-03-17 00:59:11.549688 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.549691 | orchestrator | 2026-03-17 00:59:11.549695 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-17 00:59:11.549699 | orchestrator | Tuesday 17 March 2026 00:53:39 +0000 (0:00:00.786) 0:04:59.541 ********* 2026-03-17 00:59:11.549703 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:11.549708 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:11.549712 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:11.549716 | orchestrator | 2026-03-17 00:59:11.549720 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-17 00:59:11.549724 | orchestrator | Tuesday 17 March 2026 00:53:40 +0000 (0:00:00.720) 0:05:00.261 ********* 2026-03-17 00:59:11.549727 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.549731 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.549735 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.549739 | orchestrator | 2026-03-17 00:59:11.549742 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-17 00:59:11.549746 | orchestrator | Tuesday 17 March 2026 00:53:40 +0000 (0:00:00.327) 0:05:00.589 ********* 2026-03-17 00:59:11.549750 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 
00:59:11.549754 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:59:11.549757 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:59:11.549761 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-17 00:59:11.549765 | orchestrator | 
2026-03-17 00:59:11.549768 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-17 00:59:11.549772 | orchestrator | Tuesday 17 March 2026 00:53:50 +0000 (0:00:10.339) 0:05:10.928 *********
2026-03-17 00:59:11.549776 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.549780 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.549783 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.549787 | orchestrator | 
2026-03-17 00:59:11.549791 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-17 00:59:11.549794 | orchestrator | Tuesday 17 March 2026 00:53:51 +0000 (0:00:00.690) 0:05:11.619 *********
2026-03-17 00:59:11.549798 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-03-17 00:59:11.549802 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-17 00:59:11.549805 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-17 00:59:11.549809 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 00:59:11.549813 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:59:11.549817 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:59:11.549820 | orchestrator | 
2026-03-17 00:59:11.549834 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-17 00:59:11.549842 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:02.057) 0:05:13.676 *********
2026-03-17 00:59:11.549845 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-03-17 00:59:11.549849 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-17 00:59:11.549853 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-17 00:59:11.549857 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:59:11.549860 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-17 00:59:11.549864 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-17 00:59:11.549868 | orchestrator | 
2026-03-17 00:59:11.549871 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-17 00:59:11.549875 | orchestrator | Tuesday 17 March 2026 00:53:54 +0000 (0:00:01.226) 0:05:14.903 *********
2026-03-17 00:59:11.549879 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.549883 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.549886 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.549890 | orchestrator | 
2026-03-17 00:59:11.549894 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-17 00:59:11.549897 | orchestrator | Tuesday 17 March 2026 00:53:55 +0000 (0:00:00.675) 0:05:15.579 *********
2026-03-17 00:59:11.549901 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.549916 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.549922 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.549928 | orchestrator | 
2026-03-17 00:59:11.549934 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-17 00:59:11.549939 | orchestrator | Tuesday 17 March 2026 00:53:55 +0000 (0:00:00.287) 0:05:15.870 *********
2026-03-17 00:59:11.549945 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.549951 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.549956 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.549962 | orchestrator | 
2026-03-17 00:59:11.549969 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-17 00:59:11.549973 | orchestrator | Tuesday 17 March 2026 00:53:56 +0000 (0:00:00.541) 0:05:16.466 *********
2026-03-17 00:59:11.549977 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.549980 | orchestrator | 
2026-03-17 00:59:11.549984 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-17 00:59:11.549988 | orchestrator | Tuesday 17 March 2026 00:53:56 +0000 (0:00:00.541) 0:05:17.009 *********
2026-03-17 00:59:11.549992 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.549995 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.549999 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.550003 | orchestrator | 
2026-03-17 00:59:11.550006 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-17 00:59:11.550010 | orchestrator | Tuesday 17 March 2026 00:53:57 +0000 (0:00:00.413) 0:05:17.422 *********
2026-03-17 00:59:11.550041 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.550045 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.550049 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:11.550053 | orchestrator | 
2026-03-17 00:59:11.550057 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-17 00:59:11.550060 | orchestrator | Tuesday 17 March 2026 00:53:58 +0000 (0:00:00.724) 0:05:18.146 *********
2026-03-17 00:59:11.550064 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.550068 | orchestrator | 
2026-03-17 00:59:11.550071 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-17 00:59:11.550077 | orchestrator | Tuesday 17 March 2026 00:53:58 +0000 (0:00:00.593) 0:05:18.740 *********
2026-03-17 00:59:11.550091 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550102 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550108 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550114 | orchestrator | 
2026-03-17 00:59:11.550126 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-17 00:59:11.550132 | orchestrator | Tuesday 17 March 2026 00:53:59 +0000 (0:00:01.207) 0:05:19.947 *********
2026-03-17 00:59:11.550138 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550144 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550150 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550156 | orchestrator | 
2026-03-17 00:59:11.550163 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-17 00:59:11.550168 | orchestrator | Tuesday 17 March 2026 00:54:01 +0000 (0:00:01.639) 0:05:21.586 *********
2026-03-17 00:59:11.550175 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550182 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550188 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550195 | orchestrator | 
2026-03-17 00:59:11.550201 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-17 00:59:11.550208 | orchestrator | Tuesday 17 March 2026 00:54:03 +0000 (0:00:02.043) 0:05:23.629 *********
2026-03-17 00:59:11.550214 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550220 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550227 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550231 | orchestrator | 
2026-03-17 00:59:11.550234 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-17 00:59:11.550238 | orchestrator | Tuesday 17 March 2026 00:54:05 +0000 (0:00:02.227) 0:05:25.857 *********
2026-03-17 00:59:11.550242 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.550246 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:11.550249 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-17 00:59:11.550253 | orchestrator | 
2026-03-17 00:59:11.550257 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-17 00:59:11.550261 | orchestrator | Tuesday 17 March 2026 00:54:06 +0000 (0:00:00.418) 0:05:26.276 *********
2026-03-17 00:59:11.550264 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-17 00:59:11.550286 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-17 00:59:11.550291 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-17 00:59:11.550295 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-17 00:59:11.550299 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-17 00:59:11.550302 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-03-17 00:59:11.550306 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.550310 | orchestrator | 
2026-03-17 00:59:11.550314 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-17 00:59:11.550317 | orchestrator | Tuesday 17 March 2026 00:54:42 +0000 (0:00:36.726) 0:06:03.002 *********
2026-03-17 00:59:11.550321 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.550325 | orchestrator | 
2026-03-17 00:59:11.550329 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-17 00:59:11.550332 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:01.342) 0:06:04.345 *********
2026-03-17 00:59:11.550336 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.550340 | orchestrator | 
2026-03-17 00:59:11.550343 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-17 00:59:11.550347 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:00.298) 0:06:04.643 *********
2026-03-17 00:59:11.550351 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.550355 | orchestrator | 
2026-03-17 00:59:11.550358 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-17 00:59:11.550366 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:00.144) 0:06:04.788 *********
2026-03-17 00:59:11.550370 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-17 00:59:11.550374 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-17 00:59:11.550378 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-17 00:59:11.550381 | orchestrator | 
2026-03-17 00:59:11.550385 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-17 00:59:11.550389 | orchestrator | Tuesday 17 March 2026 00:54:51 +0000 (0:00:06.394) 0:06:11.183 *********
2026-03-17 00:59:11.550393 | orchestrator | skipping: [testbed-node-2] => (item=balancer) 
2026-03-17 00:59:11.550397 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-17 00:59:11.550400 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-17 00:59:11.550404 | orchestrator | skipping: [testbed-node-2] => (item=status) 
2026-03-17 00:59:11.550408 | orchestrator | 
2026-03-17 00:59:11.550412 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:59:11.550415 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:04.567) 0:06:15.750 *********
2026-03-17 00:59:11.550419 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550423 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550427 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550430 | orchestrator | 
2026-03-17 00:59:11.550434 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-17 00:59:11.550438 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.808) 0:06:16.558 *********
2026-03-17 00:59:11.550444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.550448 | orchestrator | 
2026-03-17 00:59:11.550452 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-17 00:59:11.550456 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.451) 0:06:17.009 *********
2026-03-17 00:59:11.550459 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.550463 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.550467 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.550471 | orchestrator | 
2026-03-17 00:59:11.550475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-17 00:59:11.550478 | orchestrator | Tuesday 17 March 2026 00:54:57 +0000 (0:00:00.280) 0:06:17.290 *********
2026-03-17 00:59:11.550482 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.550486 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.550490 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.550493 | orchestrator | 
2026-03-17 00:59:11.550497 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-17 00:59:11.550501 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:01.514) 0:06:18.804 *********
2026-03-17 00:59:11.550505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-03-17 00:59:11.550508 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-03-17 00:59:11.550512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-03-17 00:59:11.550516 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:11.550519 | orchestrator | 
2026-03-17 00:59:11.550523 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-17 00:59:11.550527 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:00.574) 0:06:19.379 *********
2026-03-17 00:59:11.550531 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.550534 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.550538 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.550542 | orchestrator | 
2026-03-17 00:59:11.550546 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-17 00:59:11.550549 | orchestrator | 
2026-03-17 00:59:11.550553 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:59:11.550560 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:00.500) 0:06:19.880 *********
2026-03-17 00:59:11.550576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.550580 | orchestrator | 
2026-03-17 00:59:11.550584 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:59:11.550588 | orchestrator | Tuesday 17 March 2026 00:55:00 +0000 (0:00:00.583) 0:06:20.463 *********
2026-03-17 00:59:11.550592 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.550595 | orchestrator | 
2026-03-17 00:59:11.550599 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:59:11.550603 | orchestrator | Tuesday 17 March 2026 00:55:00 +0000 (0:00:00.474) 0:06:20.938 *********
2026-03-17 00:59:11.550607 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550610 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550614 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550618 | orchestrator | 
2026-03-17 00:59:11.550622 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:59:11.550625 | orchestrator | Tuesday 17 March 2026 00:55:01 +0000 (0:00:00.238) 0:06:21.176 *********
2026-03-17 00:59:11.550629 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550633 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550636 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550640 | orchestrator | 
2026-03-17 00:59:11.550644 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:59:11.550648 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:00.893) 0:06:22.070 *********
2026-03-17 00:59:11.550651 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550655 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550659 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550662 | orchestrator | 
2026-03-17 00:59:11.550666 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:59:11.550670 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:00.702) 0:06:22.773 *********
2026-03-17 00:59:11.550673 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550677 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550681 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550684 | orchestrator | 
2026-03-17 00:59:11.550688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:59:11.550692 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:00.603) 0:06:23.376 *********
2026-03-17 00:59:11.550696 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550699 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550703 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550707 | orchestrator | 
2026-03-17 00:59:11.550711 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 00:59:11.550714 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:00.282) 0:06:23.658 *********
2026-03-17 00:59:11.550718 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550722 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550726 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550729 | orchestrator | 
2026-03-17 00:59:11.550733 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 00:59:11.550737 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.457) 0:06:24.116 *********
2026-03-17 00:59:11.550740 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550744 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550748 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550751 | orchestrator | 
2026-03-17 00:59:11.550755 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 00:59:11.550759 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.256) 0:06:24.372 *********
2026-03-17 00:59:11.550766 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550769 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550776 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550779 | orchestrator | 
2026-03-17 00:59:11.550783 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 00:59:11.550787 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.673) 0:06:25.046 *********
2026-03-17 00:59:11.550791 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550794 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550798 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550802 | orchestrator | 
2026-03-17 00:59:11.550806 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 00:59:11.550809 | orchestrator | Tuesday 17 March 2026 00:55:05 +0000 (0:00:00.653) 0:06:25.699 *********
2026-03-17 00:59:11.550813 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550817 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550821 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550824 | orchestrator | 
2026-03-17 00:59:11.550828 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 00:59:11.550832 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:00.458) 0:06:26.157 *********
2026-03-17 00:59:11.550835 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550839 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550843 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550846 | orchestrator | 
2026-03-17 00:59:11.550850 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 00:59:11.550854 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:00.258) 0:06:26.416 *********
2026-03-17 00:59:11.550858 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550861 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550865 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550869 | orchestrator | 
2026-03-17 00:59:11.550872 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 00:59:11.550876 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:00.270) 0:06:26.686 *********
2026-03-17 00:59:11.550880 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550884 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550887 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550891 | orchestrator | 
2026-03-17 00:59:11.550895 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 00:59:11.550898 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:00.269) 0:06:26.956 *********
2026-03-17 00:59:11.550902 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.550935 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.550942 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.550946 | orchestrator | 
2026-03-17 00:59:11.550949 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 00:59:11.550953 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:00.454) 0:06:27.410 *********
2026-03-17 00:59:11.550957 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.550963 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.550970 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.550976 | orchestrator | 
2026-03-17 00:59:11.550987 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:59:11.550995 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:00.257) 0:06:27.667 *********
2026-03-17 00:59:11.551001 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551008 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551014 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551020 | orchestrator | 
2026-03-17 00:59:11.551026 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:59:11.551032 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:00.251) 0:06:27.919 *********
2026-03-17 00:59:11.551038 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551044 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551056 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551063 | orchestrator | 
2026-03-17 00:59:11.551070 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:59:11.551077 | orchestrator | Tuesday 17 March 2026 00:55:08 +0000 (0:00:00.277) 0:06:28.197 *********
2026-03-17 00:59:11.551083 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551090 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551096 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551103 | orchestrator | 
2026-03-17 00:59:11.551108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:59:11.551111 | orchestrator | Tuesday 17 March 2026 00:55:08 +0000 (0:00:00.475) 0:06:28.672 *********
2026-03-17 00:59:11.551115 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551119 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551123 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551126 | orchestrator | 
2026-03-17 00:59:11.551130 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-17 00:59:11.551134 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:00.456) 0:06:29.129 *********
2026-03-17 00:59:11.551137 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551141 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551145 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551149 | orchestrator | 
2026-03-17 00:59:11.551152 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-17 00:59:11.551156 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:00.294) 0:06:29.424 *********
2026-03-17 00:59:11.551160 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 00:59:11.551164 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:59:11.551167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:59:11.551171 | orchestrator | 
2026-03-17 00:59:11.551175 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-17 00:59:11.551179 | orchestrator | Tuesday 17 March 2026 00:55:10 +0000 (0:00:00.917) 0:06:30.341 *********
2026-03-17 00:59:11.551182 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.551186 | orchestrator | 
2026-03-17 00:59:11.551190 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-17 00:59:11.551193 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:00.841) 0:06:31.182 *********
2026-03-17 00:59:11.551200 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551204 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551207 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551211 | orchestrator | 
2026-03-17 00:59:11.551216 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-17 00:59:11.551222 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:00.295) 0:06:31.478 *********
2026-03-17 00:59:11.551231 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551238 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551244 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551250 | orchestrator | 
2026-03-17 00:59:11.551256 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-17 00:59:11.551262 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:00.300) 0:06:31.778 *********
2026-03-17 00:59:11.551268 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551273 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551279 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551285 | orchestrator | 
2026-03-17 00:59:11.551290 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-17 00:59:11.551296 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:00.976) 0:06:32.755 *********
2026-03-17 00:59:11.551302 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551307 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551319 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551326 | orchestrator | 
2026-03-17 00:59:11.551332 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-17 00:59:11.551338 | orchestrator | Tuesday 17 March 2026 00:55:13 +0000 (0:00:00.389) 0:06:33.144 *********
2026-03-17 00:59:11.551344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 00:59:11.551350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 00:59:11.551356 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 00:59:11.551361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 00:59:11.551367 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 00:59:11.551382 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 00:59:11.551392 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 00:59:11.551400 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 00:59:11.551405 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 00:59:11.551411 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 00:59:11.551417 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 00:59:11.551423 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 00:59:11.551429 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 00:59:11.551434 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 00:59:11.551439 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 00:59:11.551445 | orchestrator | 
2026-03-17 00:59:11.551451 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-17 00:59:11.551457 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:04.109) 0:06:37.253 *********
2026-03-17 00:59:11.551463 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551469 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551475 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551481 | orchestrator | 
2026-03-17 00:59:11.551487 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-17 00:59:11.551495 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:00.280) 0:06:37.534 *********
2026-03-17 00:59:11.551499 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.551502 | orchestrator | 
2026-03-17 00:59:11.551506 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-17 00:59:11.551510 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:00.807) 0:06:38.341 *********
2026-03-17 00:59:11.551514 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:59:11.551517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:59:11.551521 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:59:11.551525 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-17 00:59:11.551529 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-17 00:59:11.551533 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-17 00:59:11.551537 | orchestrator | 
2026-03-17 00:59:11.551540 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-17 00:59:11.551544 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:01.115) 0:06:39.457 *********
2026-03-17 00:59:11.551552 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:59:11.551556 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-03-17 00:59:11.551560 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-17 00:59:11.551564 | orchestrator | 
2026-03-17 00:59:11.551567 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-17 00:59:11.551575 | orchestrator | Tuesday 17 March 2026 00:55:21 +0000 (0:00:02.037) 0:06:41.495 *********
2026-03-17 00:59:11.551578 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-17 00:59:11.551582 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-17 00:59:11.551586 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2026-03-17 00:59:11.551590 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.551593 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-03-17 00:59:11.551597 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.551603 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-17 00:59:11.551609 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2026-03-17 00:59:11.551619 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.551625 | orchestrator | 
2026-03-17 00:59:11.551631 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-17 00:59:11.551637 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:01.353) 0:06:42.849 *********
2026-03-17 00:59:11.551644 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.551649 | orchestrator | 
2026-03-17 00:59:11.551654 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-17 00:59:11.551659 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:01.880) 0:06:44.730 *********
2026-03-17 00:59:11.551665 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.551671 | orchestrator | 
2026-03-17 00:59:11.551676 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-17 00:59:11.551682 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:00.481) 0:06:45.211 *********
2026-03-17 00:59:11.551688 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dc88f193-a403-571c-9716-867079cb0a77', 'data_vg': 'ceph-dc88f193-a403-571c-9716-867079cb0a77'})
2026-03-17 00:59:11.551694 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-45fdc78c-b598-5156-b36d-ba4cd7c12386', 'data_vg': 'ceph-45fdc78c-b598-5156-b36d-ba4cd7c12386'})
2026-03-17 00:59:11.551704 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c41c00e-01b2-5de9-9d7e-31888b7f9771', 'data_vg': 'ceph-3c41c00e-01b2-5de9-9d7e-31888b7f9771'})
2026-03-17 00:59:11.551710 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e905ad0-9805-5328-aec5-92944dddbd57', 'data_vg': 'ceph-9e905ad0-9805-5328-aec5-92944dddbd57'})
2026-03-17 00:59:11.551715 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b5d6da3-626f-5c09-a421-20ac1510e3d2', 'data_vg': 'ceph-2b5d6da3-626f-5c09-a421-20ac1510e3d2'})
2026-03-17 00:59:11.551721 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5', 'data_vg': 'ceph-b1b21aa2-16de-5cd3-9497-37bc0f66c5a5'})
2026-03-17 00:59:11.551726 | orchestrator | 
2026-03-17 00:59:11.551732 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-17 00:59:11.551738 | orchestrator | Tuesday 17 March 2026 00:56:01 +0000 (0:00:36.113) 0:07:21.325 *********
2026-03-17 00:59:11.551743 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.551749 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.551755 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.551760 | orchestrator | 
2026-03-17 00:59:11.551766 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-17 00:59:11.551772 | orchestrator | Tuesday 17 March 2026 00:56:01 +0000 (0:00:00.415) 0:07:21.741 *********
2026-03-17 00:59:11.551778 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.551795 | orchestrator | 
2026-03-17 00:59:11.551801 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-17 00:59:11.551807 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:00.451) 0:07:22.192 *********
2026-03-17 00:59:11.551812 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551818 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551824 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551829 | orchestrator | 
2026-03-17 00:59:11.551835 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-17 00:59:11.551841 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:00.567) 0:07:22.759 *********
2026-03-17 00:59:11.551846 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.551852 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.551857 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.551862 | orchestrator | 
2026-03-17 00:59:11.551868 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-17 00:59:11.551874 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:02.379) 0:07:25.139 *********
2026-03-17 00:59:11.551880 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.551886 | orchestrator | 
2026-03-17 00:59:11.551893 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-17 00:59:11.551899 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.499) 0:07:25.638 ********* 2026-03-17 00:59:11.551918 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.551925 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.551931 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.551937 | orchestrator | 2026-03-17 00:59:11.551941 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-17 00:59:11.551945 | orchestrator | Tuesday 17 March 2026 00:56:06 +0000 (0:00:01.287) 0:07:26.926 ********* 2026-03-17 00:59:11.551948 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.551952 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.551956 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.551959 | orchestrator | 2026-03-17 00:59:11.551963 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-17 00:59:11.551970 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:01.477) 0:07:28.403 ********* 2026-03-17 00:59:11.551974 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.551978 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.551981 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.551985 | orchestrator | 2026-03-17 00:59:11.551989 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-17 00:59:11.551992 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:01.900) 0:07:30.304 ********* 2026-03-17 00:59:11.551996 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552000 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552003 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552007 | orchestrator | 2026-03-17 00:59:11.552011 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-17 00:59:11.552015 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:00.320) 0:07:30.624 ********* 2026-03-17 00:59:11.552018 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552022 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552026 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552029 | orchestrator | 2026-03-17 00:59:11.552033 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-17 00:59:11.552037 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:00.296) 0:07:30.921 ********* 2026-03-17 00:59:11.552041 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-17 00:59:11.552044 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-17 00:59:11.552048 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-17 00:59:11.552055 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 00:59:11.552059 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-17 00:59:11.552063 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-17 00:59:11.552067 | orchestrator | 2026-03-17 00:59:11.552070 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-17 00:59:11.552074 | orchestrator | Tuesday 17 March 2026 00:56:12 +0000 (0:00:01.284) 0:07:32.206 ********* 2026-03-17 00:59:11.552078 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-17 00:59:11.552082 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-17 00:59:11.552085 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-17 00:59:11.552089 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-17 00:59:11.552093 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-17 00:59:11.552100 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-17 00:59:11.552104 | orchestrator | 2026-03-17 00:59:11.552108 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-17 00:59:11.552112 | orchestrator | Tuesday 17 March 2026 00:56:14 +0000 (0:00:02.182) 0:07:34.388 ********* 2026-03-17 00:59:11.552115 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-17 00:59:11.552119 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-17 00:59:11.552123 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-17 00:59:11.552127 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-17 00:59:11.552130 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-17 00:59:11.552134 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-17 00:59:11.552138 | orchestrator | 2026-03-17 00:59:11.552142 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-17 00:59:11.552145 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:04.887) 0:07:39.275 ********* 2026-03-17 00:59:11.552149 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552154 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:59:11.552165 | orchestrator | 2026-03-17 00:59:11.552175 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-17 00:59:11.552181 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:03.313) 0:07:42.589 ********* 2026-03-17 00:59:11.552186 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552192 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552198 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-17 00:59:11.552204 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:59:11.552210 | orchestrator | 2026-03-17 00:59:11.552215 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-17 00:59:11.552221 | orchestrator | Tuesday 17 March 2026 00:56:35 +0000 (0:00:12.930) 0:07:55.519 ********* 2026-03-17 00:59:11.552227 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552233 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552238 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552245 | orchestrator | 2026-03-17 00:59:11.552251 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:59:11.552257 | orchestrator | Tuesday 17 March 2026 00:56:36 +0000 (0:00:00.700) 0:07:56.219 ********* 2026-03-17 00:59:11.552263 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552269 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552275 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552281 | orchestrator | 2026-03-17 00:59:11.552288 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-17 00:59:11.552292 | orchestrator | Tuesday 17 March 2026 00:56:36 +0000 (0:00:00.452) 0:07:56.672 ********* 2026-03-17 00:59:11.552296 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.552300 | orchestrator | 2026-03-17 00:59:11.552309 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-17 00:59:11.552313 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:00.452) 0:07:57.124 ********* 2026-03-17 00:59:11.552317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.552321 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-17 00:59:11.552325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.552329 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552332 | orchestrator | 2026-03-17 00:59:11.552339 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-17 00:59:11.552343 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:00.335) 0:07:57.460 ********* 2026-03-17 00:59:11.552347 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552350 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552354 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552358 | orchestrator | 2026-03-17 00:59:11.552362 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-17 00:59:11.552365 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:00.252) 0:07:57.712 ********* 2026-03-17 00:59:11.552369 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552373 | orchestrator | 2026-03-17 00:59:11.552376 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-17 00:59:11.552380 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:00.175) 0:07:57.888 ********* 2026-03-17 00:59:11.552384 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552388 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552391 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552395 | orchestrator | 2026-03-17 00:59:11.552399 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-17 00:59:11.552402 | orchestrator | Tuesday 17 March 2026 00:56:38 +0000 (0:00:00.451) 0:07:58.339 ********* 2026-03-17 00:59:11.552406 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552410 | orchestrator | 2026-03-17 00:59:11.552414 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-17 00:59:11.552417 | orchestrator | Tuesday 17 March 2026 00:56:38 +0000 (0:00:00.185) 0:07:58.525 ********* 2026-03-17 00:59:11.552421 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552425 | orchestrator | 2026-03-17 00:59:11.552428 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-17 00:59:11.552432 | orchestrator | Tuesday 17 March 2026 00:56:38 +0000 (0:00:00.175) 0:07:58.700 ********* 2026-03-17 00:59:11.552437 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552443 | orchestrator | 2026-03-17 00:59:11.552449 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-17 00:59:11.552455 | orchestrator | Tuesday 17 March 2026 00:56:38 +0000 (0:00:00.093) 0:07:58.794 ********* 2026-03-17 00:59:11.552460 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552466 | orchestrator | 2026-03-17 00:59:11.552476 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-17 00:59:11.552483 | orchestrator | Tuesday 17 March 2026 00:56:38 +0000 (0:00:00.176) 0:07:58.971 ********* 2026-03-17 00:59:11.552490 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552496 | orchestrator | 2026-03-17 00:59:11.552503 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-17 00:59:11.552509 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:00.179) 0:07:59.150 ********* 2026-03-17 00:59:11.552516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.552520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.552524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.552528 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
00:59:11.552531 | orchestrator | 2026-03-17 00:59:11.552535 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-17 00:59:11.552543 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:00.336) 0:07:59.487 ********* 2026-03-17 00:59:11.552546 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552550 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552554 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552557 | orchestrator | 2026-03-17 00:59:11.552561 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-17 00:59:11.552565 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:00.263) 0:07:59.750 ********* 2026-03-17 00:59:11.552569 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552572 | orchestrator | 2026-03-17 00:59:11.552577 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-17 00:59:11.552583 | orchestrator | Tuesday 17 March 2026 00:56:40 +0000 (0:00:00.561) 0:08:00.312 ********* 2026-03-17 00:59:11.552589 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552595 | orchestrator | 2026-03-17 00:59:11.552600 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-17 00:59:11.552606 | orchestrator | 2026-03-17 00:59:11.552611 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:59:11.552617 | orchestrator | Tuesday 17 March 2026 00:56:40 +0000 (0:00:00.650) 0:08:00.962 ********* 2026-03-17 00:59:11.552622 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.552628 | orchestrator | 2026-03-17 00:59:11.552634 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-17 00:59:11.552639 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:01.160) 0:08:02.123 ********* 2026-03-17 00:59:11.552645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:11.552650 | orchestrator | 2026-03-17 00:59:11.552656 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:59:11.552662 | orchestrator | Tuesday 17 March 2026 00:56:43 +0000 (0:00:00.990) 0:08:03.114 ********* 2026-03-17 00:59:11.552668 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552673 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552679 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552684 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.552690 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.552695 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.552701 | orchestrator | 2026-03-17 00:59:11.552708 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:59:11.552714 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:01.080) 0:08:04.195 ********* 2026-03-17 00:59:11.552724 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.552730 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.552736 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.552742 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.552747 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.552752 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.552758 | orchestrator | 2026-03-17 00:59:11.552764 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:59:11.552769 | orchestrator | Tuesday 17 
March 2026 00:56:44 +0000 (0:00:00.682) 0:08:04.877 ********* 2026-03-17 00:59:11.552775 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.552780 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.552786 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.552791 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.552797 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.552803 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.552808 | orchestrator | 2026-03-17 00:59:11.552814 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:59:11.552819 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.676) 0:08:05.553 ********* 2026-03-17 00:59:11.552830 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.552836 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.552841 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.552847 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.552853 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.552859 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.552865 | orchestrator | 2026-03-17 00:59:11.552870 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:59:11.552876 | orchestrator | Tuesday 17 March 2026 00:56:46 +0000 (0:00:00.945) 0:08:06.499 ********* 2026-03-17 00:59:11.552882 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552888 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552894 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552900 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.552920 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.552927 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.552932 | orchestrator | 2026-03-17 00:59:11.552938 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-17 00:59:11.552944 | orchestrator | Tuesday 17 March 2026 00:56:47 +0000 (0:00:01.057) 0:08:07.556 ********* 2026-03-17 00:59:11.552950 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.552956 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.552968 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.552974 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.552979 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.552985 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.552990 | orchestrator | 2026-03-17 00:59:11.552996 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:59:11.553002 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.782) 0:08:08.339 ********* 2026-03-17 00:59:11.553008 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553014 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.553020 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.553026 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553031 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553038 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553043 | orchestrator | 2026-03-17 00:59:11.553048 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:59:11.553055 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.564) 0:08:08.904 ********* 2026-03-17 00:59:11.553060 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.553066 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.553072 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.553078 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.553084 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.553091 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.553100 | 
orchestrator | 2026-03-17 00:59:11.553107 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:59:11.553112 | orchestrator | Tuesday 17 March 2026 00:56:50 +0000 (0:00:01.204) 0:08:10.108 ********* 2026-03-17 00:59:11.553118 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.553124 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.553129 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.553135 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.553141 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:11.553146 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.553152 | orchestrator | 2026-03-17 00:59:11.553157 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:59:11.553162 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:01.158) 0:08:11.266 ********* 2026-03-17 00:59:11.553168 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553174 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.553180 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.553186 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553196 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553203 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553212 | orchestrator | 2026-03-17 00:59:11.553218 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:59:11.553224 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.716) 0:08:11.983 ********* 2026-03-17 00:59:11.553229 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553235 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.553241 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.553246 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:11.553252 | orchestrator | ok: [testbed-node-1] 2026-03-17 
00:59:11.553259 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:11.553264 | orchestrator | 2026-03-17 00:59:11.553270 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:59:11.553276 | orchestrator | Tuesday 17 March 2026 00:56:52 +0000 (0:00:00.504) 0:08:12.487 ********* 2026-03-17 00:59:11.553282 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.553288 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.553296 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.553304 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553310 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553316 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553322 | orchestrator | 2026-03-17 00:59:11.553327 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:59:11.553337 | orchestrator | Tuesday 17 March 2026 00:56:53 +0000 (0:00:00.643) 0:08:13.131 ********* 2026-03-17 00:59:11.553342 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.553348 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.553353 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.553360 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553366 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553372 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553377 | orchestrator | 2026-03-17 00:59:11.553383 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:59:11.553389 | orchestrator | Tuesday 17 March 2026 00:56:53 +0000 (0:00:00.489) 0:08:13.621 ********* 2026-03-17 00:59:11.553395 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.553401 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.553407 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.553413 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 00:59:11.553419 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553425 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553431 | orchestrator | 2026-03-17 00:59:11.553437 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:59:11.553443 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.667) 0:08:14.289 ********* 2026-03-17 00:59:11.553450 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553455 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.553461 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.553466 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553472 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553478 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553484 | orchestrator | 2026-03-17 00:59:11.553490 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:59:11.553496 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.499) 0:08:14.789 ********* 2026-03-17 00:59:11.553502 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553509 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.553515 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.553521 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:11.553528 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:11.553535 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:11.553541 | orchestrator | 2026-03-17 00:59:11.553548 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:59:11.553560 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:00.654) 0:08:15.443 ********* 2026-03-17 00:59:11.553566 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.553578 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:59:11.553585 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.553591 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.553598 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.553604 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.553611 | orchestrator | 
2026-03-17 00:59:11.553617 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:59:11.553624 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:00.512) 0:08:15.955 *********
2026-03-17 00:59:11.553630 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.553637 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.553643 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.553650 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.553665 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.553676 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.553681 | orchestrator | 
2026-03-17 00:59:11.553687 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:59:11.553693 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.653) 0:08:16.609 *********
2026-03-17 00:59:11.553699 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.553705 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.553711 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.553717 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.553723 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.553729 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.553736 | orchestrator | 
2026-03-17 00:59:11.553743 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-17 00:59:11.553749 | orchestrator | Tuesday 17 March 2026 00:56:57 +0000 (0:00:01.012) 0:08:17.622 *********
2026-03-17 00:59:11.553756 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.553763 | orchestrator | 
2026-03-17 00:59:11.553769 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-17 00:59:11.553776 | orchestrator | Tuesday 17 March 2026 00:57:01 +0000 (0:00:03.978) 0:08:21.600 *********
2026-03-17 00:59:11.553783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.553789 | orchestrator | 
2026-03-17 00:59:11.553796 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-17 00:59:11.553802 | orchestrator | Tuesday 17 March 2026 00:57:03 +0000 (0:00:02.023) 0:08:23.624 *********
2026-03-17 00:59:11.553809 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.553815 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.553822 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.553828 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.553835 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.553841 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.553848 | orchestrator | 
2026-03-17 00:59:11.553854 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-17 00:59:11.553861 | orchestrator | Tuesday 17 March 2026 00:57:04 +0000 (0:00:01.391) 0:08:25.015 *********
2026-03-17 00:59:11.553867 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.553874 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.553880 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.553887 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.553893 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.553900 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.553936 | orchestrator | 
2026-03-17 00:59:11.553942 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-17 00:59:11.553948 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:01.098) 0:08:26.114 *********
2026-03-17 00:59:11.553955 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.553966 | orchestrator | 
2026-03-17 00:59:11.553972 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-17 00:59:11.553982 | orchestrator | Tuesday 17 March 2026 00:57:07 +0000 (0:00:01.005) 0:08:27.120 *********
2026-03-17 00:59:11.553989 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.553995 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.554002 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.554008 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.554046 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.554053 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.554060 | orchestrator | 
2026-03-17 00:59:11.554067 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-17 00:59:11.554074 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:01.525) 0:08:28.645 *********
2026-03-17 00:59:11.554081 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.554088 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.554095 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.554102 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.554109 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.554115 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.554122 | orchestrator | 
2026-03-17 00:59:11.554129 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-17 00:59:11.554136 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:03.525) 0:08:32.170 *********
2026-03-17 00:59:11.554143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:59:11.554150 | orchestrator | 
2026-03-17 00:59:11.554157 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-17 00:59:11.554164 | orchestrator | Tuesday 17 March 2026 00:57:13 +0000 (0:00:01.087) 0:08:33.257 *********
2026-03-17 00:59:11.554170 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554176 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554182 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554188 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.554193 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.554200 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.554206 | orchestrator | 
2026-03-17 00:59:11.554212 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-17 00:59:11.554217 | orchestrator | Tuesday 17 March 2026 00:57:13 +0000 (0:00:00.558) 0:08:33.815 *********
2026-03-17 00:59:11.554223 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.554234 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.554241 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.554247 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:11.554253 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:11.554259 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:11.554265 | orchestrator | 
2026-03-17 00:59:11.554271 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-17 00:59:11.554278 | orchestrator | Tuesday 17 March 2026 00:57:16 +0000 (0:00:02.388) 0:08:36.203 *********
2026-03-17 00:59:11.554284 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554289 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554295 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554301 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:11.554307 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:11.554313 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:11.554320 | orchestrator | 
2026-03-17 00:59:11.554325 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-17 00:59:11.554331 | orchestrator | 
2026-03-17 00:59:11.554336 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:59:11.554342 | orchestrator | Tuesday 17 March 2026 00:57:16 +0000 (0:00:00.761) 0:08:36.965 *********
2026-03-17 00:59:11.554353 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.554359 | orchestrator | 
2026-03-17 00:59:11.554364 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:59:11.554370 | orchestrator | Tuesday 17 March 2026 00:57:17 +0000 (0:00:00.712) 0:08:37.677 *********
2026-03-17 00:59:11.554376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.554382 | orchestrator | 
2026-03-17 00:59:11.554389 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:59:11.554395 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:00.479) 0:08:38.157 *********
2026-03-17 00:59:11.554401 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554408 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554414 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554419 | orchestrator | 
2026-03-17 00:59:11.554425 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:59:11.554431 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:00.409) 0:08:38.566 *********
2026-03-17 00:59:11.554437 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554443 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554449 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554456 | orchestrator | 
2026-03-17 00:59:11.554462 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:59:11.554468 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:00:00.620) 0:08:39.187 *********
2026-03-17 00:59:11.554474 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554480 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554487 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554493 | orchestrator | 
2026-03-17 00:59:11.554499 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:59:11.554505 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:00:00.600) 0:08:39.788 *********
2026-03-17 00:59:11.554511 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554517 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554523 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554530 | orchestrator | 
2026-03-17 00:59:11.554536 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:59:11.554542 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:00.613) 0:08:40.401 *********
2026-03-17 00:59:11.554548 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554554 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554564 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554570 | orchestrator | 
2026-03-17 00:59:11.554576 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 00:59:11.554583 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:00.615) 0:08:41.017 *********
2026-03-17 00:59:11.554589 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554596 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554602 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554609 | orchestrator | 
2026-03-17 00:59:11.554615 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 00:59:11.554621 | orchestrator | Tuesday 17 March 2026 00:57:21 +0000 (0:00:00.244) 0:08:41.262 *********
2026-03-17 00:59:11.554628 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554634 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554640 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554647 | orchestrator | 
2026-03-17 00:59:11.554653 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 00:59:11.554660 | orchestrator | Tuesday 17 March 2026 00:57:21 +0000 (0:00:00.244) 0:08:41.506 *********
2026-03-17 00:59:11.554666 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554672 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554682 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554688 | orchestrator | 
2026-03-17 00:59:11.554695 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 00:59:11.554702 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:00.731) 0:08:42.238 *********
2026-03-17 00:59:11.554708 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554715 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554720 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554726 | orchestrator | 
2026-03-17 00:59:11.554732 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 00:59:11.554738 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.914) 0:08:43.152 *********
2026-03-17 00:59:11.554743 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554749 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554755 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554761 | orchestrator | 
2026-03-17 00:59:11.554767 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 00:59:11.554773 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.288) 0:08:43.441 *********
2026-03-17 00:59:11.554780 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554792 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554799 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554805 | orchestrator | 
2026-03-17 00:59:11.554810 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 00:59:11.554816 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.311) 0:08:43.753 *********
2026-03-17 00:59:11.554822 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554828 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554835 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554840 | orchestrator | 
2026-03-17 00:59:11.554846 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 00:59:11.554853 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.299) 0:08:44.052 *********
2026-03-17 00:59:11.554858 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554864 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554870 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554877 | orchestrator | 
2026-03-17 00:59:11.554883 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 00:59:11.554889 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.608) 0:08:44.660 *********
2026-03-17 00:59:11.554895 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.554901 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.554945 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.554952 | orchestrator | 
2026-03-17 00:59:11.554958 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 00:59:11.554964 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.340) 0:08:45.001 *********
2026-03-17 00:59:11.554969 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.554975 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.554981 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.554987 | orchestrator | 
2026-03-17 00:59:11.554992 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:59:11.554999 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.285) 0:08:45.286 *********
2026-03-17 00:59:11.555005 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.555011 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.555017 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.555023 | orchestrator | 
2026-03-17 00:59:11.555029 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:59:11.555035 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.285) 0:08:45.572 *********
2026-03-17 00:59:11.555040 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.555046 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.555051 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.555057 | orchestrator | 
2026-03-17 00:59:11.555062 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:59:11.555073 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.415) 0:08:45.988 *********
2026-03-17 00:59:11.555079 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.555085 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.555090 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.555095 | orchestrator | 
2026-03-17 00:59:11.555101 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:59:11.555107 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:00.259) 0:08:46.248 *********
2026-03-17 00:59:11.555113 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.555119 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.555125 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.555131 | orchestrator | 
2026-03-17 00:59:11.555136 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-17 00:59:11.555142 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:00.527) 0:08:46.775 *********
2026-03-17 00:59:11.555148 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.555154 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.555164 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-17 00:59:11.555171 | orchestrator | 
2026-03-17 00:59:11.555177 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-17 00:59:11.555183 | orchestrator | Tuesday 17 March 2026 00:57:27 +0000 (0:00:00.542) 0:08:47.317 *********
2026-03-17 00:59:11.555190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.555196 | orchestrator | 
2026-03-17 00:59:11.555204 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-17 00:59:11.555213 | orchestrator | Tuesday 17 March 2026 00:57:29 +0000 (0:00:01.876) 0:08:49.194 *********
2026-03-17 00:59:11.555220 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]}) 
2026-03-17 00:59:11.555227 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.555232 | orchestrator | 
2026-03-17 00:59:11.555238 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-17 00:59:11.555245 | orchestrator | Tuesday 17 March 2026 00:57:29 +0000 (0:00:00.198) 0:08:49.392 *********
2026-03-17 00:59:11.555253 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 00:59:11.555265 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 00:59:11.555271 | orchestrator | 
2026-03-17 00:59:11.555277 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-17 00:59:11.555284 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:08.332) 0:08:57.725 *********
2026-03-17 00:59:11.555296 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:59:11.555302 | orchestrator | 
2026-03-17 00:59:11.555309 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-17 00:59:11.555315 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:03.488) 0:09:01.214 *********
2026-03-17 00:59:11.555320 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.555327 | orchestrator | 
2026-03-17 00:59:11.555336 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-17 00:59:11.555343 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:00.794) 0:09:02.008 *********
2026-03-17 00:59:11.555360 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 00:59:11.555366 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 00:59:11.555372 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 00:59:11.555378 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-17 00:59:11.555384 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-17 00:59:11.555390 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-17 00:59:11.555396 | orchestrator | 
2026-03-17 00:59:11.555402 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-17 00:59:11.555408 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:01.011) 0:09:03.020 *********
2026-03-17 00:59:11.555414 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:59:11.555420 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-03-17 00:59:11.555425 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-17 00:59:11.555431 | orchestrator | 
2026-03-17 00:59:11.555437 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-17 00:59:11.555442 | orchestrator | Tuesday 17 March 2026 00:57:45 +0000 (0:00:02.315) 0:09:05.335 *********
2026-03-17 00:59:11.555448 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-17 00:59:11.555453 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-03-17 00:59:11.555459 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555465 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-17 00:59:11.555470 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2026-03-17 00:59:11.555475 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555481 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-17 00:59:11.555486 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2026-03-17 00:59:11.555492 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555497 | orchestrator | 
2026-03-17 00:59:11.555503 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-17 00:59:11.555509 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:01.219) 0:09:06.555 *********
2026-03-17 00:59:11.555515 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555521 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555527 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555532 | orchestrator | 
2026-03-17 00:59:11.555538 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-17 00:59:11.555543 | orchestrator | Tuesday 17 March 2026 00:57:48 +0000 (0:00:02.430) 0:09:08.985 *********
2026-03-17 00:59:11.555549 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.555554 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.555560 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.555566 | orchestrator | 
2026-03-17 00:59:11.555575 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-17 00:59:11.555580 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:00.565) 0:09:09.551 *********
2026-03-17 00:59:11.555586 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.555592 | orchestrator | 
2026-03-17 00:59:11.555597 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-17 00:59:11.555602 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:00.514) 0:09:10.066 *********
2026-03-17 00:59:11.555607 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.555613 | orchestrator | 
2026-03-17 00:59:11.555618 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-17 00:59:11.555624 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:00.748) 0:09:10.814 *********
2026-03-17 00:59:11.555629 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555639 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555644 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555649 | orchestrator | 
2026-03-17 00:59:11.555655 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-17 00:59:11.555660 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:01.183) 0:09:11.998 *********
2026-03-17 00:59:11.555666 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555671 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555676 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555682 | orchestrator | 
2026-03-17 00:59:11.555687 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-17 00:59:11.555692 | orchestrator | Tuesday 17 March 2026 00:57:53 +0000 (0:00:01.169) 0:09:13.167 *********
2026-03-17 00:59:11.555698 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555704 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555709 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555715 | orchestrator | 
2026-03-17 00:59:11.555722 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-17 00:59:11.555727 | orchestrator | Tuesday 17 March 2026 00:57:55 +0000 (0:00:02.251) 0:09:15.419 *********
2026-03-17 00:59:11.555732 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555743 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555749 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555754 | orchestrator | 
2026-03-17 00:59:11.555760 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-17 00:59:11.555766 | orchestrator | Tuesday 17 March 2026 00:57:57 +0000 (0:00:02.361) 0:09:17.780 *********
2026-03-17 00:59:11.555772 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.555778 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.555785 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.555790 | orchestrator | 
2026-03-17 00:59:11.555796 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:59:11.555802 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:02.054) 0:09:19.835 *********
2026-03-17 00:59:11.555808 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555813 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555819 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555825 | orchestrator | 
2026-03-17 00:59:11.555831 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-17 00:59:11.555837 | orchestrator | Tuesday 17 March 2026 00:58:00 +0000 (0:00:00.736) 0:09:20.571 *********
2026-03-17 00:59:11.555843 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.555849 | orchestrator | 
2026-03-17 00:59:11.555855 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-17 00:59:11.555861 | orchestrator | Tuesday 17 March 2026 00:58:00 +0000 (0:00:00.439) 0:09:21.011 *********
2026-03-17 00:59:11.555867 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.555874 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.555880 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.555885 | orchestrator | 
2026-03-17 00:59:11.555891 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-17 00:59:11.555897 | orchestrator | Tuesday 17 March 2026 00:58:01 +0000 (0:00:00.310) 0:09:21.322 *********
2026-03-17 00:59:11.555903 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:59:11.555923 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:59:11.555929 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:59:11.555934 | orchestrator | 
2026-03-17 00:59:11.555940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-17 00:59:11.555946 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:01.243) 0:09:22.565 *********
2026-03-17 00:59:11.555952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-03-17 00:59:11.555958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-03-17 00:59:11.555970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-03-17 00:59:11.555976 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.555982 | orchestrator | 
2026-03-17 00:59:11.555988 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-17 00:59:11.555995 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.538) 0:09:23.103 *********
2026-03-17 00:59:11.556001 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556007 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556013 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556019 | orchestrator | 
2026-03-17 00:59:11.556025 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-17 00:59:11.556030 | orchestrator | 
2026-03-17 00:59:11.556036 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:59:11.556043 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.505) 0:09:23.608 *********
2026-03-17 00:59:11.556048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.556056 | orchestrator | 
2026-03-17 00:59:11.556062 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:59:11.556073 | orchestrator | Tuesday 17 March 2026 00:58:04 +0000 (0:00:00.697) 0:09:24.305 *********
2026-03-17 00:59:11.556079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:59:11.556085 | orchestrator | 
2026-03-17 00:59:11.556090 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:59:11.556096 | orchestrator | Tuesday 17 March 2026 00:58:04 +0000 (0:00:00.515) 0:09:24.821 *********
2026-03-17 00:59:11.556102 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556108 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556114 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556120 | orchestrator | 
2026-03-17 00:59:11.556126 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:59:11.556131 | orchestrator | Tuesday 17 March 2026 00:58:05 +0000 (0:00:00.502) 0:09:25.324 *********
2026-03-17 00:59:11.556137 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556142 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556148 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556154 | orchestrator | 
2026-03-17 00:59:11.556159 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:59:11.556165 | orchestrator | Tuesday 17 March 2026 00:58:05 +0000 (0:00:00.650) 0:09:25.975 *********
2026-03-17 00:59:11.556170 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556176 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556183 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556189 | orchestrator | 
2026-03-17 00:59:11.556195 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:59:11.556201 | orchestrator | Tuesday 17 March 2026 00:58:06 +0000 (0:00:00.684) 0:09:26.659 *********
2026-03-17 00:59:11.556207 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556213 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556219 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556225 | orchestrator | 
2026-03-17 00:59:11.556230 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:59:11.556236 | orchestrator | Tuesday 17 March 2026 00:58:07 +0000 (0:00:00.813) 0:09:27.473 *********
2026-03-17 00:59:11.556242 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556248 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556255 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556261 | orchestrator | 
2026-03-17 00:59:11.556272 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 00:59:11.556278 | orchestrator | Tuesday 17 March 2026 00:58:07 +0000 (0:00:00.571) 0:09:28.044 *********
2026-03-17 00:59:11.556284 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556295 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556301 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556307 | orchestrator | 
2026-03-17 00:59:11.556312 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 00:59:11.556318 | orchestrator | Tuesday 17 March 2026 00:58:08 +0000 (0:00:00.370) 0:09:28.415 *********
2026-03-17 00:59:11.556324 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556330 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556335 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556341 | orchestrator | 
2026-03-17 00:59:11.556347 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 00:59:11.556353 | orchestrator | Tuesday 17 March 2026 00:58:08 +0000 (0:00:00.298) 0:09:28.713 *********
2026-03-17 00:59:11.556358 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556364 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556370 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556376 | orchestrator | 
2026-03-17 00:59:11.556381 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 00:59:11.556388 | orchestrator | Tuesday 17 March 2026 00:58:09 +0000 (0:00:00.750) 0:09:29.464 *********
2026-03-17 00:59:11.556394 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556400 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556405 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556411 | orchestrator | 
2026-03-17 00:59:11.556417 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 00:59:11.556423 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:00.927) 0:09:30.391 *********
2026-03-17 00:59:11.556428 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556434 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556440 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556446 | orchestrator | 
2026-03-17 00:59:11.556452 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 00:59:11.556457 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:00.263) 0:09:30.655 *********
2026-03-17 00:59:11.556463 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556469 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556475 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556481 | orchestrator | 
2026-03-17 00:59:11.556486 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 00:59:11.556492 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:00.264) 0:09:30.919 *********
2026-03-17 00:59:11.556498 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556504 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556510 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556515 | orchestrator | 
2026-03-17 00:59:11.556521 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 00:59:11.556527 | orchestrator | Tuesday 17 March 2026 00:58:11 +0000 (0:00:00.308) 0:09:31.228 *********
2026-03-17 00:59:11.556533 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556539 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556544 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556550 | orchestrator | 
2026-03-17 00:59:11.556556 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 00:59:11.556562 | orchestrator | Tuesday 17 March 2026 00:58:11 +0000 (0:00:00.463) 0:09:31.691 *********
2026-03-17 00:59:11.556568 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556577 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556586 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556591 | orchestrator | 
2026-03-17 00:59:11.556597 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 00:59:11.556608 | orchestrator | Tuesday 17 March 2026 00:58:11 +0000 (0:00:00.312) 0:09:32.003 *********
2026-03-17 00:59:11.556614 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556620 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556625 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556637 | orchestrator | 
2026-03-17 00:59:11.556643 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:59:11.556649 | orchestrator | Tuesday 17 March 2026 00:58:12 +0000 (0:00:00.262) 0:09:32.266 *********
2026-03-17 00:59:11.556655 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556663 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556671 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556677 | orchestrator | 
2026-03-17 00:59:11.556683 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:59:11.556689 | orchestrator | Tuesday 17 March 2026 00:58:12 +0000 (0:00:00.276) 0:09:32.543 *********
2026-03-17 00:59:11.556694 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:59:11.556700 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:59:11.556706 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:59:11.556711 | orchestrator | 
2026-03-17 00:59:11.556717 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:59:11.556723 | orchestrator | Tuesday 17 March 2026 00:58:12 +0000 (0:00:00.412) 0:09:32.955 *********
2026-03-17 00:59:11.556729 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:59:11.556735 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:59:11.556741 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:59:11.556747 | orchestrator | 
2026-03-17 00:59:11.556752 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:59:11.556759 | orchestrator | Tuesday 17 March 2026 00:58:13 +0000 (0:00:00.355) 0:09:33.310 ********* 2026-03-17 00:59:11.556765 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.556771 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.556777 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.556782 | orchestrator | 2026-03-17 00:59:11.556788 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-17 00:59:11.556794 | orchestrator | Tuesday 17 March 2026 00:58:13 +0000 (0:00:00.515) 0:09:33.825 ********* 2026-03-17 00:59:11.556800 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.556806 | orchestrator | 2026-03-17 00:59:11.556813 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 00:59:11.556823 | orchestrator | Tuesday 17 March 2026 00:58:14 +0000 (0:00:00.809) 0:09:34.635 ********* 2026-03-17 00:59:11.556829 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.556835 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 00:59:11.556838 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:59:11.556842 | orchestrator | 2026-03-17 00:59:11.556846 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 00:59:11.556850 | orchestrator | Tuesday 17 March 2026 00:58:16 +0000 (0:00:01.906) 0:09:36.541 ********* 2026-03-17 00:59:11.556853 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:59:11.556857 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 00:59:11.556861 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.556865 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-17 00:59:11.556868 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-17 00:59:11.556872 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.556876 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:59:11.556880 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-17 00:59:11.556883 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.556887 | orchestrator | 2026-03-17 00:59:11.556891 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-17 00:59:11.556895 | orchestrator | Tuesday 17 March 2026 00:58:17 +0000 (0:00:01.222) 0:09:37.764 ********* 2026-03-17 00:59:11.556898 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.556902 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.556919 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.556931 | orchestrator | 2026-03-17 00:59:11.556937 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-17 00:59:11.556940 | orchestrator | Tuesday 17 March 2026 00:58:18 +0000 (0:00:00.397) 0:09:38.161 ********* 2026-03-17 00:59:11.556944 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.556948 | orchestrator | 2026-03-17 00:59:11.556952 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-17 00:59:11.556955 | orchestrator | Tuesday 17 March 2026 00:58:18 +0000 (0:00:00.812) 0:09:38.973 ********* 2026-03-17 00:59:11.556960 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.556964 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.556968 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.556972 | orchestrator | 2026-03-17 00:59:11.556976 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-17 00:59:11.556980 | orchestrator | Tuesday 17 March 2026 00:58:19 +0000 (0:00:00.785) 0:09:39.759 ********* 2026-03-17 00:59:11.556983 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.556987 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:59:11.556994 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.556998 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:59:11.557002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.557006 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:59:11.557009 | orchestrator | 2026-03-17 00:59:11.557013 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 00:59:11.557017 | orchestrator | Tuesday 17 March 2026 00:58:23 +0000 (0:00:03.983) 0:09:43.742 ********* 2026-03-17 00:59:11.557021 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.557024 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:59:11.557028 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.557032 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:59:11.557035 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:59:11.557039 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:59:11.557043 | orchestrator | 2026-03-17 00:59:11.557047 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 00:59:11.557053 | orchestrator | Tuesday 17 March 2026 00:58:26 +0000 (0:00:02.366) 0:09:46.109 ********* 2026-03-17 00:59:11.557059 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:59:11.557068 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.557077 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:59:11.557082 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.557087 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:59:11.557093 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.557098 | orchestrator | 2026-03-17 00:59:11.557103 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-17 00:59:11.557113 | orchestrator | Tuesday 17 March 2026 00:58:27 +0000 (0:00:00.956) 0:09:47.066 ********* 2026-03-17 00:59:11.557123 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-17 00:59:11.557129 | orchestrator | 2026-03-17 00:59:11.557135 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-17 00:59:11.557141 | orchestrator | Tuesday 17 March 2026 00:58:27 +0000 (0:00:00.206) 0:09:47.272 ********* 2026-03-17 00:59:11.557146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-17 00:59:11.557152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557176 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.557182 | orchestrator | 2026-03-17 00:59:11.557187 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-17 00:59:11.557192 | orchestrator | Tuesday 17 March 2026 00:58:27 +0000 (0:00:00.512) 0:09:47.785 ********* 2026-03-17 00:59:11.557198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:59:11.557228 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
00:59:11.557234 | orchestrator | 2026-03-17 00:59:11.557241 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-17 00:59:11.557247 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:00.578) 0:09:48.363 ********* 2026-03-17 00:59:11.557253 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:59:11.557260 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:59:11.557270 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:59:11.557275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:59:11.557279 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:59:11.557283 | orchestrator | 2026-03-17 00:59:11.557287 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-17 00:59:11.557290 | orchestrator | Tuesday 17 March 2026 00:58:57 +0000 (0:00:29.277) 0:10:17.641 ********* 2026-03-17 00:59:11.557294 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.557298 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.557305 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.557309 | orchestrator | 2026-03-17 00:59:11.557312 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-17 00:59:11.557316 | orchestrator | 
Tuesday 17 March 2026 00:58:57 +0000 (0:00:00.257) 0:10:17.898 ********* 2026-03-17 00:59:11.557320 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.557324 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.557327 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.557331 | orchestrator | 2026-03-17 00:59:11.557335 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-17 00:59:11.557339 | orchestrator | Tuesday 17 March 2026 00:58:58 +0000 (0:00:00.497) 0:10:18.395 ********* 2026-03-17 00:59:11.557342 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.557346 | orchestrator | 2026-03-17 00:59:11.557350 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-17 00:59:11.557354 | orchestrator | Tuesday 17 March 2026 00:58:58 +0000 (0:00:00.470) 0:10:18.866 ********* 2026-03-17 00:59:11.557357 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.557361 | orchestrator | 2026-03-17 00:59:11.557368 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-17 00:59:11.557372 | orchestrator | Tuesday 17 March 2026 00:58:59 +0000 (0:00:00.618) 0:10:19.484 ********* 2026-03-17 00:59:11.557376 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.557379 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.557383 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.557387 | orchestrator | 2026-03-17 00:59:11.557391 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-17 00:59:11.557394 | orchestrator | Tuesday 17 March 2026 00:59:00 +0000 (0:00:01.197) 0:10:20.682 ********* 2026-03-17 00:59:11.557398 | orchestrator | changed: 
[testbed-node-3] 2026-03-17 00:59:11.557402 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.557405 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.557409 | orchestrator | 2026-03-17 00:59:11.557413 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-17 00:59:11.557417 | orchestrator | Tuesday 17 March 2026 00:59:01 +0000 (0:00:00.993) 0:10:21.675 ********* 2026-03-17 00:59:11.557420 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:59:11.557424 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:59:11.557428 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:59:11.557432 | orchestrator | 2026-03-17 00:59:11.557435 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-17 00:59:11.557439 | orchestrator | Tuesday 17 March 2026 00:59:03 +0000 (0:00:01.902) 0:10:23.578 ********* 2026-03-17 00:59:11.557443 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.557447 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.557450 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 00:59:11.557454 | orchestrator | 2026-03-17 00:59:11.557458 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:59:11.557462 | orchestrator | Tuesday 17 March 2026 00:59:05 +0000 (0:00:02.289) 0:10:25.867 ********* 2026-03-17 00:59:11.557465 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.557470 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.557476 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.557485 | orchestrator 
| 2026-03-17 00:59:11.557492 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 00:59:11.557498 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 (0:00:00.286) 0:10:26.153 ********* 2026-03-17 00:59:11.557508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:59:11.557514 | orchestrator | 2026-03-17 00:59:11.557520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 00:59:11.557526 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 (0:00:00.597) 0:10:26.751 ********* 2026-03-17 00:59:11.557531 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.557537 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.557543 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.557549 | orchestrator | 2026-03-17 00:59:11.557555 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 00:59:11.557562 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 (0:00:00.275) 0:10:27.027 ********* 2026-03-17 00:59:11.557568 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:59:11.557574 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:59:11.557580 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:59:11.557587 | orchestrator | 2026-03-17 00:59:11.557590 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 00:59:11.557597 | orchestrator | Tuesday 17 March 2026 00:59:07 +0000 (0:00:00.289) 0:10:27.316 ********* 2026-03-17 00:59:11.557601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:59:11.557604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:59:11.557608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:59:11.557612 | orchestrator 
| skipping: [testbed-node-3] 2026-03-17 00:59:11.557616 | orchestrator | 2026-03-17 00:59:11.557619 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 00:59:11.557623 | orchestrator | Tuesday 17 March 2026 00:59:08 +0000 (0:00:00.856) 0:10:28.172 ********* 2026-03-17 00:59:11.557627 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:59:11.557632 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:59:11.557640 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:59:11.557648 | orchestrator | 2026-03-17 00:59:11.557653 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:59:11.557660 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-17 00:59:11.557667 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-17 00:59:11.557673 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-17 00:59:11.557679 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-17 00:59:11.557686 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-17 00:59:11.557693 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-17 00:59:11.557697 | orchestrator | 2026-03-17 00:59:11.557701 | orchestrator | 2026-03-17 00:59:11.557705 | orchestrator | 2026-03-17 00:59:11.557709 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:59:11.557712 | orchestrator | Tuesday 17 March 2026 00:59:08 +0000 (0:00:00.216) 0:10:28.389 ********* 2026-03-17 00:59:11.557716 | orchestrator | =============================================================================== 
2026-03-17 00:59:11.557720 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 53.85s 2026-03-17 00:59:11.557726 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.73s 2026-03-17 00:59:11.557732 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.11s 2026-03-17 00:59:11.557749 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.28s 2026-03-17 00:59:11.557756 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.00s 2026-03-17 00:59:11.557762 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s 2026-03-17 00:59:11.557768 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.34s 2026-03-17 00:59:11.557774 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.62s 2026-03-17 00:59:11.557781 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.33s 2026-03-17 00:59:11.557788 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.39s 2026-03-17 00:59:11.557794 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.63s 2026-03-17 00:59:11.557800 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.89s 2026-03-17 00:59:11.557806 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.57s 2026-03-17 00:59:11.557810 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.11s 2026-03-17 00:59:11.557814 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.98s 2026-03-17 00:59:11.557818 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.98s 2026-03-17 
00:59:11.557821 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.55s 2026-03-17 00:59:11.557825 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.53s 2026-03-17 00:59:11.557829 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.49s 2026-03-17 00:59:11.557833 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.41s 2026-03-17 00:59:11.557836 | orchestrator | 2026-03-17 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:14.576214 | orchestrator | 2026-03-17 00:59:14 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:14.578150 | orchestrator | 2026-03-17 00:59:14 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:14.580323 | orchestrator | 2026-03-17 00:59:14 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:14.580366 | orchestrator | 2026-03-17 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:17.627368 | orchestrator | 2026-03-17 00:59:17 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:17.628561 | orchestrator | 2026-03-17 00:59:17 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:17.630135 | orchestrator | 2026-03-17 00:59:17 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:17.630168 | orchestrator | 2026-03-17 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:20.680126 | orchestrator | 2026-03-17 00:59:20 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:20.682090 | orchestrator | 2026-03-17 00:59:20 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:20.683998 | orchestrator | 2026-03-17 00:59:20 | INFO  | Task 
a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:20.684186 | orchestrator | 2026-03-17 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:23.734808 | orchestrator | 2026-03-17 00:59:23 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:23.736522 | orchestrator | 2026-03-17 00:59:23 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state STARTED 2026-03-17 00:59:23.738153 | orchestrator | 2026-03-17 00:59:23 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:23.738393 | orchestrator | 2026-03-17 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:26.782324 | orchestrator | 2026-03-17 00:59:26 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:26.783944 | orchestrator | 2026-03-17 00:59:26.784010 | orchestrator | 2026-03-17 00:59:26 | INFO  | Task ecd8fbbe-bd6c-4b32-8687-052d26ebc270 is in state SUCCESS 2026-03-17 00:59:26.785382 | orchestrator | 2026-03-17 00:59:26.785433 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:59:26.785441 | orchestrator | 2026-03-17 00:59:26.785445 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:59:26.785450 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.273) 0:00:00.273 ********* 2026-03-17 00:59:26.785454 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:26.785459 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:26.785463 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:26.785467 | orchestrator | 2026-03-17 00:59:26.785471 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:59:26.785475 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.248) 0:00:00.522 ********* 2026-03-17 00:59:26.785480 | orchestrator | ok: 
[testbed-node-0] => (item=enable_opensearch_True) 2026-03-17 00:59:26.785484 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-17 00:59:26.785488 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-17 00:59:26.785500 | orchestrator | 2026-03-17 00:59:26.785510 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-17 00:59:26.785514 | orchestrator | 2026-03-17 00:59:26.785518 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:59:26.785522 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.278) 0:00:00.800 ********* 2026-03-17 00:59:26.785526 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:26.785530 | orchestrator | 2026-03-17 00:59:26.785533 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-17 00:59:26.785537 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.492) 0:00:01.293 ********* 2026-03-17 00:59:26.785541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:59:26.785547 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:59:26.785551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:59:26.785555 | orchestrator | 2026-03-17 00:59:26.785559 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-17 00:59:26.785563 | orchestrator | Tuesday 17 March 2026 00:57:07 +0000 (0:00:01.070) 0:00:02.363 ********* 2026-03-17 00:59:26.785569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785680 | orchestrator | 2026-03-17 00:59:26.785686 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:59:26.785692 | orchestrator | Tuesday 17 March 2026 00:57:09 +0000 (0:00:01.364) 0:00:03.728 ********* 2026-03-17 00:59:26.785697 | 
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:26.785703 | orchestrator | 2026-03-17 00:59:26.785709 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-17 00:59:26.785714 | orchestrator | Tuesday 17 March 2026 00:57:09 +0000 (0:00:00.364) 0:00:04.092 ********* 2026-03-17 00:59:26.785729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.785786 | orchestrator | 2026-03-17 00:59:26.785793 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-17 00:59:26.785799 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:02.509) 0:00:06.602 ********* 2026-03-17 00:59:26.785806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785824 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:26.785828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785840 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:26.785844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785858 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:26.785862 | orchestrator | 2026-03-17 00:59:26.785865 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-17 00:59:26.785869 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:00.680) 0:00:07.283 ********* 2026-03-17 00:59:26.785873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785886 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:26.785889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785938 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:26.785944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:59:26.785956 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:59:26.785962 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:26.785968 | orchestrator | 2026-03-17 00:59:26.785974 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-17 00:59:26.785979 | orchestrator | Tuesday 17 March 2026 00:57:14 +0000 (0:00:01.096) 0:00:08.379 ********* 2026-03-17 00:59:26.785985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.785999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.786006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.786061 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786098 | orchestrator | 2026-03-17 00:59:26.786105 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-17 00:59:26.786116 | orchestrator | Tuesday 17 March 2026 00:57:16 +0000 (0:00:02.323) 0:00:10.702 ********* 2026-03-17 00:59:26.786123 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:26.786130 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786136 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:26.786143 | orchestrator | 2026-03-17 00:59:26.786148 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-17 00:59:26.786152 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 
(0:00:02.923) 0:00:13.626 ********* 2026-03-17 00:59:26.786156 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:26.786161 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786165 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:26.786169 | orchestrator | 2026-03-17 00:59:26.786173 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-17 00:59:26.786177 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:01.339) 0:00:14.966 ********* 2026-03-17 00:59:26.786182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.786190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.786200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:59:26.786207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:59:26.786230 | orchestrator | 2026-03-17 00:59:26.786235 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:59:26.786239 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:02.448) 0:00:17.415 ********* 2026-03-17 00:59:26.786244 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:26.786248 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:26.786252 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:26.786257 | orchestrator | 2026-03-17 00:59:26.786261 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:59:26.786266 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.457) 0:00:17.872 ********* 2026-03-17 00:59:26.786270 | orchestrator | 2026-03-17 00:59:26.786274 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:59:26.786277 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.061) 0:00:17.934 ********* 2026-03-17 00:59:26.786281 | orchestrator | 2026-03-17 00:59:26.786285 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:59:26.786289 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.062) 0:00:17.997 ********* 2026-03-17 00:59:26.786293 | orchestrator | 2026-03-17 00:59:26.786296 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-17 00:59:26.786300 | orchestrator | 
Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.064) 0:00:18.061 ********* 2026-03-17 00:59:26.786304 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:26.786311 | orchestrator | 2026-03-17 00:59:26.786320 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-17 00:59:26.786326 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.215) 0:00:18.277 ********* 2026-03-17 00:59:26.786332 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:26.786338 | orchestrator | 2026-03-17 00:59:26.786344 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-17 00:59:26.786350 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.219) 0:00:18.496 ********* 2026-03-17 00:59:26.786355 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786360 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:26.786367 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:26.786373 | orchestrator | 2026-03-17 00:59:26.786379 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-17 00:59:26.786385 | orchestrator | Tuesday 17 March 2026 00:58:11 +0000 (0:00:46.977) 0:01:05.473 ********* 2026-03-17 00:59:26.786391 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786397 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:26.786403 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:26.786409 | orchestrator | 2026-03-17 00:59:26.786416 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:59:26.786420 | orchestrator | Tuesday 17 March 2026 00:59:12 +0000 (0:01:01.579) 0:02:07.052 ********* 2026-03-17 00:59:26.786424 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:26.786428 | orchestrator | 2026-03-17 
00:59:26.786432 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-17 00:59:26.786435 | orchestrator | Tuesday 17 March 2026 00:59:13 +0000 (0:00:00.637) 0:02:07.690 ********* 2026-03-17 00:59:26.786439 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:26.786443 | orchestrator | 2026-03-17 00:59:26.786447 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-17 00:59:26.786451 | orchestrator | Tuesday 17 March 2026 00:59:15 +0000 (0:00:02.237) 0:02:09.928 ********* 2026-03-17 00:59:26.786454 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:26.786458 | orchestrator | 2026-03-17 00:59:26.786462 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-17 00:59:26.786466 | orchestrator | Tuesday 17 March 2026 00:59:17 +0000 (0:00:01.816) 0:02:11.745 ********* 2026-03-17 00:59:26.786469 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:26.786478 | orchestrator | 2026-03-17 00:59:26.786482 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-17 00:59:26.786486 | orchestrator | Tuesday 17 March 2026 00:59:19 +0000 (0:00:02.193) 0:02:13.939 ********* 2026-03-17 00:59:26.786489 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786493 | orchestrator | 2026-03-17 00:59:26.786497 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-17 00:59:26.786501 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:02.459) 0:02:16.398 ********* 2026-03-17 00:59:26.786505 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:26.786508 | orchestrator | 2026-03-17 00:59:26.786512 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:59:26.786517 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-17 00:59:26.786521 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:59:26.786528 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:59:26.786532 | orchestrator | 2026-03-17 00:59:26.786536 | orchestrator | 2026-03-17 00:59:26.786540 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:59:26.786544 | orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:03.054) 0:02:19.453 ********* 2026-03-17 00:59:26.786548 | orchestrator | =============================================================================== 2026-03-17 00:59:26.786551 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 61.58s 2026-03-17 00:59:26.786555 | orchestrator | opensearch : Restart opensearch container ------------------------------ 46.98s 2026-03-17 00:59:26.786559 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.05s 2026-03-17 00:59:26.786563 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.92s 2026-03-17 00:59:26.786567 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.51s 2026-03-17 00:59:26.786570 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.46s 2026-03-17 00:59:26.786574 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.45s 2026-03-17 00:59:26.786578 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s 2026-03-17 00:59:26.786581 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.24s 2026-03-17 00:59:26.786585 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s 2026-03-17 
00:59:26.786589 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 1.82s 2026-03-17 00:59:26.786592 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.36s 2026-03-17 00:59:26.786596 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.34s 2026-03-17 00:59:26.786647 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.10s 2026-03-17 00:59:26.786656 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.07s 2026-03-17 00:59:26.786660 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.68s 2026-03-17 00:59:26.786664 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2026-03-17 00:59:26.786668 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-03-17 00:59:26.786672 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2026-03-17 00:59:26.786676 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.36s 2026-03-17 00:59:26.787769 | orchestrator | 2026-03-17 00:59:26 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:26.787828 | orchestrator | 2026-03-17 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:29.823748 | orchestrator | 2026-03-17 00:59:29 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:29.825786 | orchestrator | 2026-03-17 00:59:29 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:29.825832 | orchestrator | 2026-03-17 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:32.858922 | orchestrator | 2026-03-17 00:59:32 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state 
STARTED 2026-03-17 00:59:32.860660 | orchestrator | 2026-03-17 00:59:32 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:32.860705 | orchestrator | 2026-03-17 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:35.900929 | orchestrator | 2026-03-17 00:59:35 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:35.901268 | orchestrator | 2026-03-17 00:59:35 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:35.901303 | orchestrator | 2026-03-17 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:38.941560 | orchestrator | 2026-03-17 00:59:38 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:38.944179 | orchestrator | 2026-03-17 00:59:38 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:38.944223 | orchestrator | 2026-03-17 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:41.989665 | orchestrator | 2026-03-17 00:59:41 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:41.991781 | orchestrator | 2026-03-17 00:59:41 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:41.991810 | orchestrator | 2026-03-17 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:45.042248 | orchestrator | 2026-03-17 00:59:45 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:45.044431 | orchestrator | 2026-03-17 00:59:45 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:45.044485 | orchestrator | 2026-03-17 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:48.089477 | orchestrator | 2026-03-17 00:59:48 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:48.089547 | orchestrator | 2026-03-17 00:59:48 | INFO  
| Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:48.089554 | orchestrator | 2026-03-17 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:51.134217 | orchestrator | 2026-03-17 00:59:51 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state STARTED 2026-03-17 00:59:51.136185 | orchestrator | 2026-03-17 00:59:51 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:51.136425 | orchestrator | 2026-03-17 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:54.185955 | orchestrator | 2026-03-17 00:59:54 | INFO  | Task fece6fe6-b4a0-4d55-80c3-078950cc5047 is in state SUCCESS 2026-03-17 00:59:54.186525 | orchestrator | 2026-03-17 00:59:54.186549 | orchestrator | 2026-03-17 00:59:54.186554 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-17 00:59:54.186562 | orchestrator | 2026-03-17 00:59:54.186571 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-17 00:59:54.186581 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.077) 0:00:00.077 ********* 2026-03-17 00:59:54.186609 | orchestrator | ok: [localhost] => { 2026-03-17 00:59:54.186617 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-17 00:59:54.186624 | orchestrator | } 2026-03-17 00:59:54.186630 | orchestrator | 2026-03-17 00:59:54.186636 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-17 00:59:54.186642 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.042) 0:00:00.120 ********* 2026-03-17 00:59:54.186648 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-17 00:59:54.186656 | orchestrator | ...ignoring 2026-03-17 00:59:54.186663 | orchestrator | 2026-03-17 00:59:54.186668 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-17 00:59:54.186723 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:02.834) 0:00:02.954 ********* 2026-03-17 00:59:54.186730 | orchestrator | skipping: [localhost] 2026-03-17 00:59:54.186733 | orchestrator | 2026-03-17 00:59:54.186737 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-17 00:59:54.186741 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:00.042) 0:00:02.997 ********* 2026-03-17 00:59:54.186745 | orchestrator | ok: [localhost] 2026-03-17 00:59:54.186749 | orchestrator | 2026-03-17 00:59:54.186753 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:59:54.186757 | orchestrator | 2026-03-17 00:59:54.186783 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:59:54.186862 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:00.213) 0:00:03.210 ********* 2026-03-17 00:59:54.186935 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:54.186945 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:54.186952 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:54.186957 | orchestrator | 2026-03-17 00:59:54.186964 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:59:54.186970 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:00.297) 0:00:03.508 ********* 2026-03-17 00:59:54.186990 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-17 00:59:54.186997 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-17 00:59:54.187002 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-17 00:59:54.187009 | orchestrator | 2026-03-17 00:59:54.187015 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-17 00:59:54.187021 | orchestrator | 2026-03-17 00:59:54.187027 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-17 00:59:54.187034 | orchestrator | Tuesday 17 March 2026 00:57:09 +0000 (0:00:00.480) 0:00:03.989 ********* 2026-03-17 00:59:54.187040 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 00:59:54.187046 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-17 00:59:54.187053 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-17 00:59:54.187060 | orchestrator | 2026-03-17 00:59:54.187064 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:54.187068 | orchestrator | Tuesday 17 March 2026 00:57:09 +0000 (0:00:00.319) 0:00:04.308 ********* 2026-03-17 00:59:54.187073 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:54.187078 | orchestrator | 2026-03-17 00:59:54.187082 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-17 00:59:54.187086 | orchestrator | Tuesday 17 March 2026 00:57:10 +0000 (0:00:00.540) 0:00:04.849 ********* 2026-03-17 00:59:54.187105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187157 | orchestrator | 2026-03-17 00:59:54.187169 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-17 00:59:54.187175 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:02.738) 0:00:07.588 ********* 2026-03-17 00:59:54.187180 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187187 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.187193 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.187198 | orchestrator | 2026-03-17 00:59:54.187204 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-17 00:59:54.187209 | orchestrator | Tuesday 17 March 2026 00:57:13 +0000 (0:00:00.597) 0:00:08.186 ********* 2026-03-17 00:59:54.187214 | orchestrator | skipping: [testbed-node-2] 2026-03-17 
00:59:54.187220 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187226 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.187232 | orchestrator | 2026-03-17 00:59:54.187238 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-17 00:59:54.187243 | orchestrator | Tuesday 17 March 2026 00:57:14 +0000 (0:00:01.536) 0:00:09.723 ********* 2026-03-17 00:59:54.187252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 
00:59:54.187286 | orchestrator | 2026-03-17 00:59:54.187292 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-17 00:59:54.187297 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:03.933) 0:00:13.657 ********* 2026-03-17 00:59:54.187303 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187309 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.187316 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.187327 | orchestrator | 2026-03-17 00:59:54.187333 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-17 00:59:54.187340 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:00:00.950) 0:00:14.607 ********* 2026-03-17 00:59:54.187344 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:54.187348 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.187351 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:54.187355 | orchestrator | 2026-03-17 00:59:54.187359 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:54.187363 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:03.959) 0:00:18.566 ********* 2026-03-17 00:59:54.187367 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:54.187370 | orchestrator | 2026-03-17 00:59:54.187374 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-17 00:59:54.187378 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.591) 0:00:19.157 ********* 2026-03-17 00:59:54.187387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187391 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187406 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:54.187414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187421 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.187427 | orchestrator | 2026-03-17 00:59:54.187432 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-17 00:59:54.187438 | orchestrator | Tuesday 17 March 2026 00:57:28 +0000 (0:00:03.661) 0:00:22.819 ********* 2026-03-17 00:59:54.187457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187469 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:54.187480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187488 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187513 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.187519 | orchestrator | 2026-03-17 00:59:54.187525 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-17 00:59:54.187532 | orchestrator | Tuesday 17 March 2026 00:57:30 +0000 (0:00:01.956) 0:00:24.775 ********* 2026-03-17 00:59:54.187536 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187540 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.187551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187560 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.187564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:54.187568 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:54.187572 | orchestrator | 2026-03-17 00:59:54.187575 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-17 00:59:54.187579 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 
(0:00:02.713) 0:00:27.488 ********* 2026-03-17 00:59:54.187587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:54.187614 | orchestrator | 2026-03-17 00:59:54.187618 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-17 00:59:54.187621 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:04.079) 0:00:31.568 ********* 2026-03-17 00:59:54.187625 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.187629 | orchestrator | 
changed: [testbed-node-1]
2026-03-17 00:59:54.187633 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:54.187636 | orchestrator |
2026-03-17 00:59:54.187640 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-17 00:59:54.187644 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.834) 0:00:32.402 *********
2026-03-17 00:59:54.187648 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187652 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.187655 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:54.187659 | orchestrator |
2026-03-17 00:59:54.187666 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-17 00:59:54.187670 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.311) 0:00:32.713 *********
2026-03-17 00:59:54.187673 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.187677 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187681 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:54.187685 | orchestrator |
2026-03-17 00:59:54.187688 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-17 00:59:54.187692 | orchestrator | Tuesday 17 March 2026 00:57:38 +0000 (0:00:00.404) 0:00:33.118 *********
2026-03-17 00:59:54.187697 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-17 00:59:54.187702 | orchestrator | ...ignoring
2026-03-17 00:59:54.187706 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-17 00:59:54.187709 | orchestrator | ...ignoring
2026-03-17 00:59:54.187713 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-17 00:59:54.187717 | orchestrator | ...ignoring
2026-03-17 00:59:54.187721 | orchestrator |
2026-03-17 00:59:54.187725 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-17 00:59:54.187728 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:11.036) 0:00:44.155 *********
2026-03-17 00:59:54.187732 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187736 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.187740 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:54.187743 | orchestrator |
2026-03-17 00:59:54.187747 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-17 00:59:54.187751 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:00.406) 0:00:44.562 *********
2026-03-17 00:59:54.187755 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.187758 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187762 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187766 | orchestrator |
2026-03-17 00:59:54.187770 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-17 00:59:54.187774 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:00.398) 0:00:44.960 *********
2026-03-17 00:59:54.187777 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.187781 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187785 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187789 | orchestrator |
2026-03-17 00:59:54.187792 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-17 00:59:54.187796 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:00.401) 0:00:45.362 *********
2026-03-17 00:59:54.187800 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.187804 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187807 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187814 | orchestrator |
2026-03-17 00:59:54.187818 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-17 00:59:54.187821 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.468) 0:00:45.830 *********
2026-03-17 00:59:54.187825 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187829 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.187833 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:54.187836 | orchestrator |
2026-03-17 00:59:54.187840 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-17 00:59:54.187844 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.359) 0:00:46.190 *********
2026-03-17 00:59:54.187850 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.187854 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187858 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187862 | orchestrator |
2026-03-17 00:59:54.187881 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-17 00:59:54.187886 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.349) 0:00:46.545 *********
2026-03-17 00:59:54.187890 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187894 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187897 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-17 00:59:54.187901 | orchestrator |
2026-03-17 00:59:54.187905 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-17 00:59:54.187908 | orchestrator | Tuesday 17 March 2026 00:57:52 +0000 (0:00:00.349) 0:00:46.895 *********
2026-03-17 00:59:54.187912 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:54.187916 | orchestrator |
2026-03-17 00:59:54.187920 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-17 00:59:54.187923 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:09.858) 0:00:56.753 *********
2026-03-17 00:59:54.187927 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187931 | orchestrator |
2026-03-17 00:59:54.187935 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-17 00:59:54.187938 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:00.204) 0:00:56.958 *********
2026-03-17 00:59:54.187942 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.187946 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.187950 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.187953 | orchestrator |
2026-03-17 00:59:54.187957 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-17 00:59:54.187961 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:00.725) 0:00:57.683 *********
2026-03-17 00:59:54.187964 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:54.187968 | orchestrator |
2026-03-17 00:59:54.187972 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-17 00:59:54.187976 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:07.838) 0:01:05.522 *********
2026-03-17 00:59:54.187979 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.187983 | orchestrator |
2026-03-17 00:59:54.187987 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-17 00:59:54.187991 | orchestrator | Tuesday 17 March 2026 00:58:12 +0000 (0:00:01.756) 0:01:07.278 *********
2026-03-17 00:59:54.187997 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:59:54.188001 | orchestrator |
2026-03-17 00:59:54.188005 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-17 00:59:54.188009 | orchestrator | Tuesday 17 March 2026 00:58:15 +0000 (0:00:03.319) 0:01:10.598 *********
2026-03-17 00:59:54.188012 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:59:54.188016 | orchestrator |
2026-03-17 00:59:54.188020 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-17 00:59:54.188024 | orchestrator | Tuesday 17 March 2026 00:58:16 +0000 (0:00:00.432) 0:01:11.031 *********
2026-03-17 00:59:54.188028 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.188032 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:59:54.188042 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:59:54.188045 | orchestrator |
2026-03-17 00:59:54.188049 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-17 00:59:54.188053 | orchestrator | Tuesday 17 March 2026 00:58:16 +0000 (0:00:00.389) 0:01:11.421 *********
2026-03-17 00:59:54.188057 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:59:54.188060 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:54.188064 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:54.188068 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-17 00:59:54.188072 | orchestrator |
2026-03-17 00:59:54.188075 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-17 00:59:54.188079 | orchestrator | skipping: no hosts matched
2026-03-17 00:59:54.188083 | orchestrator |
2026-03-17 00:59:54.188087 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-17 00:59:54.188090 | orchestrator |
2026-03-17 00:59:54.188094 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-17 00:59:54.188098 | orchestrator | Tuesday 17 March 2026 00:58:17 +0000 (0:00:00.431) 0:01:11.853 *********
2026-03-17 00:59:54.188102 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:59:54.188105 | orchestrator |
2026-03-17 00:59:54.188109 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-17 00:59:54.188113 | orchestrator | Tuesday 17 March 2026 00:58:32 +0000 (0:00:15.181) 0:01:27.034 *********
2026-03-17 00:59:54.188117 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.188120 | orchestrator |
2026-03-17 00:59:54.188124 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-17 00:59:54.188128 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:14.529) 0:01:41.564 *********
2026-03-17 00:59:54.188132 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:59:54.188135 | orchestrator |
2026-03-17 00:59:54.188139 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-17 00:59:54.188143 | orchestrator |
2026-03-17 00:59:54.188147 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-17 00:59:54.188150 | orchestrator | Tuesday 17 March 2026 00:58:49 +0000 (0:00:02.502) 0:01:44.066 *********
2026-03-17 00:59:54.188154 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:59:54.188158 | orchestrator |
2026-03-17 00:59:54.188162 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-17 00:59:54.188165 | orchestrator | Tuesday 17 March 2026 00:59:05 +0000 (0:00:16.414) 0:02:00.481 *********
2026-03-17 00:59:54.188169 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:59:54.188173 | orchestrator |
2026-03-17 00:59:54.188177 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-17 00:59:54.188181
| orchestrator | Tuesday 17 March 2026 00:59:20 +0000 (0:00:14.821) 0:02:15.303 ********* 2026-03-17 00:59:54.188184 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:54.188188 | orchestrator | 2026-03-17 00:59:54.188192 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-17 00:59:54.188196 | orchestrator | 2026-03-17 00:59:54.188202 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 00:59:54.188206 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:02.268) 0:02:17.571 ********* 2026-03-17 00:59:54.188210 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.188214 | orchestrator | 2026-03-17 00:59:54.188217 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 00:59:54.188221 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:10.672) 0:02:28.244 ********* 2026-03-17 00:59:54.188225 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:54.188229 | orchestrator | 2026-03-17 00:59:54.188233 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 00:59:54.188236 | orchestrator | Tuesday 17 March 2026 00:59:38 +0000 (0:00:04.611) 0:02:32.856 ********* 2026-03-17 00:59:54.188240 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:54.188248 | orchestrator | 2026-03-17 00:59:54.188252 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-17 00:59:54.188256 | orchestrator | 2026-03-17 00:59:54.188260 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-17 00:59:54.188264 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:02.331) 0:02:35.187 ********* 2026-03-17 00:59:54.188267 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:54.188271 | orchestrator | 
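The "Wait for MariaDB service to sync WSREP" handlers above gate each restart on the Galera node reporting a synced state. A minimal sketch of that check (hypothetical helper; the kolla-ansible mariadb role actually runs the status query inside the container) could look like:

```python
# Sketch only: decide whether a Galera node is safe to proceed past,
# based on rows returned by SHOW STATUS LIKE 'wsrep_local_state_comment'.
def wsrep_is_synced(status_rows):
    """status_rows: iterable of (variable_name, value) pairs."""
    status = dict(status_rows)
    # A joining/donor node reports e.g. 'Donor/Desynced'; only 'Synced'
    # means the node has caught up with the cluster.
    return status.get("wsrep_local_state_comment") == "Synced"
```

A deployment loop would poll this per node before restarting the next one, which matches the one-node-at-a-time restart order visible in the log.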
2026-03-17 00:59:54.188275 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-17 00:59:54.188279 | orchestrator | Tuesday 17 March 2026 00:59:41 +0000 (0:00:00.635) 0:02:35.823 ********* 2026-03-17 00:59:54.188283 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.188286 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.188290 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.188294 | orchestrator | 2026-03-17 00:59:54.188298 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-17 00:59:54.188302 | orchestrator | Tuesday 17 March 2026 00:59:43 +0000 (0:00:02.275) 0:02:38.098 ********* 2026-03-17 00:59:54.188305 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.188309 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.188313 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.188317 | orchestrator | 2026-03-17 00:59:54.188320 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-17 00:59:54.188324 | orchestrator | Tuesday 17 March 2026 00:59:45 +0000 (0:00:02.292) 0:02:40.391 ********* 2026-03-17 00:59:54.188328 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.188334 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.188338 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.188342 | orchestrator | 2026-03-17 00:59:54.188346 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-17 00:59:54.188350 | orchestrator | Tuesday 17 March 2026 00:59:48 +0000 (0:00:02.816) 0:02:43.208 ********* 2026-03-17 00:59:54.188353 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.188357 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.188361 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:54.188365 | orchestrator | 
2026-03-17 00:59:54.188368 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-17 00:59:54.188372 | orchestrator | Tuesday 17 March 2026 00:59:50 +0000 (0:00:02.483) 0:02:45.691 ********* 2026-03-17 00:59:54.188376 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:54.188380 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:54.188384 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:54.188387 | orchestrator | 2026-03-17 00:59:54.188391 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-17 00:59:54.188395 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:02.604) 0:02:48.296 ********* 2026-03-17 00:59:54.188399 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:54.188403 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:54.188406 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:54.188410 | orchestrator | 2026-03-17 00:59:54.188414 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:59:54.188418 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:59:54.188422 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-17 00:59:54.188428 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-17 00:59:54.188432 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-17 00:59:54.188439 | orchestrator | 2026-03-17 00:59:54.188443 | orchestrator | 2026-03-17 00:59:54.188446 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:59:54.188450 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:00.218) 0:02:48.515 ********* 2026-03-17 00:59:54.188454 | 
orchestrator | =============================================================================== 2026-03-17 00:59:54.188458 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 31.60s 2026-03-17 00:59:54.188462 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 29.35s 2026-03-17 00:59:54.188465 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s 2026-03-17 00:59:54.188469 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.67s 2026-03-17 00:59:54.188473 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.86s 2026-03-17 00:59:54.188477 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.84s 2026-03-17 00:59:54.188483 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.77s 2026-03-17 00:59:54.188487 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2026-03-17 00:59:54.188490 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.08s 2026-03-17 00:59:54.188494 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.96s 2026-03-17 00:59:54.188498 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.93s 2026-03-17 00:59:54.188502 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.66s 2026-03-17 00:59:54.188506 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.32s 2026-03-17 00:59:54.188509 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2026-03-17 00:59:54.188513 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.82s 2026-03-17 00:59:54.188517 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 2.74s 2026-03-17 00:59:54.188521 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.71s 2026-03-17 00:59:54.188524 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.60s 2026-03-17 00:59:54.188528 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.48s 2026-03-17 00:59:54.188532 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.33s 2026-03-17 00:59:54.189484 | orchestrator | 2026-03-17 00:59:54 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:54.189504 | orchestrator | 2026-03-17 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:57.237289 | orchestrator | 2026-03-17 00:59:57 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 00:59:57.237378 | orchestrator | 2026-03-17 00:59:57 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 00:59:57.239238 | orchestrator | 2026-03-17 00:59:57 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 00:59:57.239331 | orchestrator | 2026-03-17 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:00.276183 | orchestrator | 2026-03-17 01:00:00 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:00.277129 | orchestrator | 2026-03-17 01:00:00 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:00.278728 | orchestrator | 2026-03-17 01:00:00 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:00.278766 | orchestrator | 2026-03-17 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:03.320979 | orchestrator | 2026-03-17 01:00:03 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 
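The recurring "Wait for MariaDB service port liveness" tasks in the recap above boil down to retrying a TCP connect until the server accepts. A minimal sketch, assuming nothing about the role's actual implementation beyond that behavior:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=2.0,
                  now=time.monotonic, sleep=time.sleep):
    """Retry TCP connects to host:port until one succeeds.

    Returns True on success; raises TimeoutError once `timeout`
    seconds have elapsed without a successful connect.
    """
    deadline = now() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if now() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable")
            sleep(interval)
```

The 10–16 second durations reported for these tasks in the recap are consistent with a few such retry intervals while mysqld finishes starting.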
2026-03-17 01:00:03.321108 | orchestrator | 2026-03-17 01:00:03 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:03.323295 | orchestrator | 2026-03-17 01:00:03 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:03.323370 | orchestrator | 2026-03-17 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:06.364923 | orchestrator | 2026-03-17 01:00:06 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:06.366611 | orchestrator | 2026-03-17 01:00:06 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:06.366687 | orchestrator | 2026-03-17 01:00:06 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:06.366697 | orchestrator | 2026-03-17 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:09.396707 | orchestrator | 2026-03-17 01:00:09 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:09.397380 | orchestrator | 2026-03-17 01:00:09 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:09.398682 | orchestrator | 2026-03-17 01:00:09 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:09.398791 | orchestrator | 2026-03-17 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:12.442368 | orchestrator | 2026-03-17 01:00:12 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:12.442431 | orchestrator | 2026-03-17 01:00:12 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:12.443356 | orchestrator | 2026-03-17 01:00:12 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:12.443391 | orchestrator | 2026-03-17 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:15.486307 | orchestrator | 2026-03-17 
01:00:15 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:15.490687 | orchestrator | 2026-03-17 01:00:15 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:15.493682 | orchestrator | 2026-03-17 01:00:15 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:15.494523 | orchestrator | 2026-03-17 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:18.528140 | orchestrator | 2026-03-17 01:00:18 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:18.528411 | orchestrator | 2026-03-17 01:00:18 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:18.529954 | orchestrator | 2026-03-17 01:00:18 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:18.530008 | orchestrator | 2026-03-17 01:00:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:21.572797 | orchestrator | 2026-03-17 01:00:21 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:21.574946 | orchestrator | 2026-03-17 01:00:21 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:21.576778 | orchestrator | 2026-03-17 01:00:21 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:21.576828 | orchestrator | 2026-03-17 01:00:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:24.618042 | orchestrator | 2026-03-17 01:00:24 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:24.619283 | orchestrator | 2026-03-17 01:00:24 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:24.620943 | orchestrator | 2026-03-17 01:00:24 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:24.621203 | orchestrator | 2026-03-17 01:00:24 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 01:00:27.658992 | orchestrator | 2026-03-17 01:00:27 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:27.659373 | orchestrator | 2026-03-17 01:00:27 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:27.661576 | orchestrator | 2026-03-17 01:00:27 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:27.661612 | orchestrator | 2026-03-17 01:00:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:30.707154 | orchestrator | 2026-03-17 01:00:30 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:30.708999 | orchestrator | 2026-03-17 01:00:30 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:30.710233 | orchestrator | 2026-03-17 01:00:30 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:30.710270 | orchestrator | 2026-03-17 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:33.746005 | orchestrator | 2026-03-17 01:00:33 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:33.747058 | orchestrator | 2026-03-17 01:00:33 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:33.749121 | orchestrator | 2026-03-17 01:00:33 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:33.749168 | orchestrator | 2026-03-17 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:36.801137 | orchestrator | 2026-03-17 01:00:36 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:36.802811 | orchestrator | 2026-03-17 01:00:36 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:36.804304 | orchestrator | 2026-03-17 01:00:36 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 
2026-03-17 01:00:36.804388 | orchestrator | 2026-03-17 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:39.846198 | orchestrator | 2026-03-17 01:00:39 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:39.846703 | orchestrator | 2026-03-17 01:00:39 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:39.847678 | orchestrator | 2026-03-17 01:00:39 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:39.847707 | orchestrator | 2026-03-17 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:42.896337 | orchestrator | 2026-03-17 01:00:42 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:42.897143 | orchestrator | 2026-03-17 01:00:42 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:42.899307 | orchestrator | 2026-03-17 01:00:42 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:42.899356 | orchestrator | 2026-03-17 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:45.941391 | orchestrator | 2026-03-17 01:00:45 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:45.942208 | orchestrator | 2026-03-17 01:00:45 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:45.944339 | orchestrator | 2026-03-17 01:00:45 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:45.944392 | orchestrator | 2026-03-17 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:49.002764 | orchestrator | 2026-03-17 01:00:49 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:49.002874 | orchestrator | 2026-03-17 01:00:49 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:49.002886 | orchestrator | 2026-03-17 
01:00:49 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:49.003122 | orchestrator | 2026-03-17 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:52.047394 | orchestrator | 2026-03-17 01:00:52 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:52.049528 | orchestrator | 2026-03-17 01:00:52 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:52.053033 | orchestrator | 2026-03-17 01:00:52 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:52.053098 | orchestrator | 2026-03-17 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:55.100176 | orchestrator | 2026-03-17 01:00:55 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:55.101602 | orchestrator | 2026-03-17 01:00:55 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:55.103013 | orchestrator | 2026-03-17 01:00:55 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:55.103390 | orchestrator | 2026-03-17 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:58.149792 | orchestrator | 2026-03-17 01:00:58 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:00:58.151412 | orchestrator | 2026-03-17 01:00:58 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:00:58.153470 | orchestrator | 2026-03-17 01:00:58 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:00:58.153535 | orchestrator | 2026-03-17 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:01.189573 | orchestrator | 2026-03-17 01:01:01 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:01.191089 | orchestrator | 2026-03-17 01:01:01 | INFO  | Task 
a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:01.192581 | orchestrator | 2026-03-17 01:01:01 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:01.193035 | orchestrator | 2026-03-17 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:04.233742 | orchestrator | 2026-03-17 01:01:04 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:04.235605 | orchestrator | 2026-03-17 01:01:04 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:04.237371 | orchestrator | 2026-03-17 01:01:04 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:04.237531 | orchestrator | 2026-03-17 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:07.279768 | orchestrator | 2026-03-17 01:01:07 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:07.281693 | orchestrator | 2026-03-17 01:01:07 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:07.284227 | orchestrator | 2026-03-17 01:01:07 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:07.284327 | orchestrator | 2026-03-17 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:10.325914 | orchestrator | 2026-03-17 01:01:10 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:10.326207 | orchestrator | 2026-03-17 01:01:10 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:10.327885 | orchestrator | 2026-03-17 01:01:10 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:10.327941 | orchestrator | 2026-03-17 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:13.367634 | orchestrator | 2026-03-17 01:01:13 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state 
STARTED 2026-03-17 01:01:13.369721 | orchestrator | 2026-03-17 01:01:13 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:13.371604 | orchestrator | 2026-03-17 01:01:13 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:13.371745 | orchestrator | 2026-03-17 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:16.417055 | orchestrator | 2026-03-17 01:01:16 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:16.419932 | orchestrator | 2026-03-17 01:01:16 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state STARTED 2026-03-17 01:01:16.421857 | orchestrator | 2026-03-17 01:01:16 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:16.421896 | orchestrator | 2026-03-17 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:19.474126 | orchestrator | 2026-03-17 01:01:19 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:19.475208 | orchestrator | 2026-03-17 01:01:19 | INFO  | Task a29c3e99-e817-4f42-bd0d-49b5c77ba5ae is in state SUCCESS 2026-03-17 01:01:19.478728 | orchestrator | 2026-03-17 01:01:19.478765 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 01:01:19.478769 | orchestrator | 2.16.14 2026-03-17 01:01:19.478774 | orchestrator | 2026-03-17 01:01:19.478784 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-17 01:01:19.478820 | orchestrator | 2026-03-17 01:01:19.478824 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-17 01:01:19.478827 | orchestrator | Tuesday 17 March 2026 00:59:12 +0000 (0:00:00.542) 0:00:00.542 ********* 2026-03-17 01:01:19.478831 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 
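The long run of "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above is a simple polling loop over the three task IDs until each leaves STARTED (here, a29c3e99… eventually reaches SUCCESS). A sketch of that loop, with `get_state` standing in for whatever status lookup the orchestrator actually uses:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, sleep=time.sleep):
    """Poll each task until it leaves the STARTED state.

    get_state(task_id) -> state string; returns {task_id: final_state}.
    Mirrors the 'Wait 1 second(s) until the next check' cadence in the log.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            if state != "STARTED":
                results[task_id] = state
        pending -= set(results)
        if pending:
            sleep(interval)
    return results
```

Note the loop re-checks every still-pending task each round, which is why all three IDs reappear in each three-line burst of the log.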
2026-03-17 01:01:19.478835 | orchestrator | 2026-03-17 01:01:19.478838 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-17 01:01:19.478841 | orchestrator | Tuesday 17 March 2026 00:59:13 +0000 (0:00:00.613) 0:00:01.156 ********* 2026-03-17 01:01:19.478844 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.478848 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.478851 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.478854 | orchestrator | 2026-03-17 01:01:19.478857 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-17 01:01:19.478861 | orchestrator | Tuesday 17 March 2026 00:59:14 +0000 (0:00:00.904) 0:00:02.061 ********* 2026-03-17 01:01:19.478864 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.478867 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.478870 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.478873 | orchestrator | 2026-03-17 01:01:19.478876 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-17 01:01:19.478880 | orchestrator | Tuesday 17 March 2026 00:59:14 +0000 (0:00:00.268) 0:00:02.329 ********* 2026-03-17 01:01:19.478883 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.478896 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.478899 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.478902 | orchestrator | 2026-03-17 01:01:19.478905 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-17 01:01:19.478909 | orchestrator | Tuesday 17 March 2026 00:59:15 +0000 (0:00:00.784) 0:00:03.113 ********* 2026-03-17 01:01:19.478912 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.478915 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.478918 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.478921 | orchestrator | 2026-03-17 01:01:19.478924 | 
orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-17 01:01:19.478927 | orchestrator | Tuesday 17 March 2026 00:59:15 +0000 (0:00:00.303) 0:00:03.417 ********* 2026-03-17 01:01:19.478930 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.478933 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.478990 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.478996 | orchestrator | 2026-03-17 01:01:19.478999 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-17 01:01:19.479003 | orchestrator | Tuesday 17 March 2026 00:59:16 +0000 (0:00:00.269) 0:00:03.686 ********* 2026-03-17 01:01:19.479006 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.479009 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.479012 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.479015 | orchestrator | 2026-03-17 01:01:19.479018 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-17 01:01:19.479021 | orchestrator | Tuesday 17 March 2026 00:59:16 +0000 (0:00:00.298) 0:00:03.985 ********* 2026-03-17 01:01:19.479024 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479028 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479031 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479034 | orchestrator | 2026-03-17 01:01:19.479037 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-17 01:01:19.479040 | orchestrator | Tuesday 17 March 2026 00:59:16 +0000 (0:00:00.468) 0:00:04.453 ********* 2026-03-17 01:01:19.479043 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.479046 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.479049 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.479052 | orchestrator | 2026-03-17 01:01:19.479056 | orchestrator | TASK [ceph-facts : Set_fact 
monitor_name ansible_facts['hostname']] ************ 2026-03-17 01:01:19.479059 | orchestrator | Tuesday 17 March 2026 00:59:17 +0000 (0:00:00.301) 0:00:04.754 ********* 2026-03-17 01:01:19.479062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:01:19.479065 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:01:19.479068 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:01:19.479071 | orchestrator | 2026-03-17 01:01:19.479074 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-17 01:01:19.479077 | orchestrator | Tuesday 17 March 2026 00:59:17 +0000 (0:00:00.629) 0:00:05.384 ********* 2026-03-17 01:01:19.479081 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.479084 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.479087 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.479090 | orchestrator | 2026-03-17 01:01:19.479093 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-17 01:01:19.479096 | orchestrator | Tuesday 17 March 2026 00:59:18 +0000 (0:00:00.444) 0:00:05.829 ********* 2026-03-17 01:01:19.479099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:01:19.479102 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:01:19.479105 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:01:19.479108 | orchestrator | 2026-03-17 01:01:19.479111 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-17 01:01:19.479217 | orchestrator | Tuesday 17 March 2026 00:59:21 +0000 (0:00:02.842) 0:00:08.672 ********* 2026-03-17 01:01:19.479221 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-17 01:01:19.479225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 01:01:19.479228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-17 01:01:19.479232 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479235 | orchestrator | 2026-03-17 01:01:19.479245 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-17 01:01:19.479251 | orchestrator | Tuesday 17 March 2026 00:59:21 +0000 (0:00:00.391) 0:00:09.063 ********* 2026-03-17 01:01:19.479255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479266 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479270 | orchestrator | 2026-03-17 01:01:19.479273 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-17 01:01:19.479276 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:00.808) 0:00:09.872 ********* 2026-03-17 01:01:19.479280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.479290 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479294 | orchestrator | 2026-03-17 01:01:19.479297 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-17 01:01:19.479300 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:00.151) 0:00:10.024 ********* 2026-03-17 01:01:19.479304 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5233ad2e666c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 00:59:19.179941', 'end': '2026-03-17 00:59:19.213628', 'delta': '0:00:00.033687', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5233ad2e666c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-17 01:01:19.479311 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '47d03ae1bb68', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 00:59:20.157493', 'end': '2026-03-17 00:59:20.181249', 'delta': '0:00:00.023756', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['47d03ae1bb68'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-17 01:01:19.479319 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9d4d02e1a476', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 00:59:20.959846', 'end': '2026-03-17 00:59:20.987947', 'delta': '0:00:00.028101', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9d4d02e1a476'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-17 01:01:19.479323 | orchestrator | 2026-03-17 01:01:19.479326 | 
orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-17 01:01:19.479329 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:00.374) 0:00:10.398 ********* 2026-03-17 01:01:19.479332 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.479336 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.479339 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.479342 | orchestrator | 2026-03-17 01:01:19.479345 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-17 01:01:19.479348 | orchestrator | Tuesday 17 March 2026 00:59:23 +0000 (0:00:00.408) 0:00:10.807 ********* 2026-03-17 01:01:19.479351 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-17 01:01:19.479354 | orchestrator | 2026-03-17 01:01:19.479357 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-17 01:01:19.479360 | orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:02.038) 0:00:12.846 ********* 2026-03-17 01:01:19.479363 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479366 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479370 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479375 | orchestrator | 2026-03-17 01:01:19.479383 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-17 01:01:19.479390 | orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:00.265) 0:00:13.112 ********* 2026-03-17 01:01:19.479395 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479401 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479406 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479411 | orchestrator | 2026-03-17 01:01:19.479416 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 01:01:19.479422 | 
orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:00.351) 0:00:13.463 ********* 2026-03-17 01:01:19.479427 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479433 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479438 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479443 | orchestrator | 2026-03-17 01:01:19.479448 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-17 01:01:19.479457 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:00.371) 0:00:13.834 ********* 2026-03-17 01:01:19.479463 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.479467 | orchestrator | 2026-03-17 01:01:19.479470 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-17 01:01:19.479474 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:00.119) 0:00:13.954 ********* 2026-03-17 01:01:19.479477 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479480 | orchestrator | 2026-03-17 01:01:19.479483 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 01:01:19.479486 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:00.198) 0:00:14.153 ********* 2026-03-17 01:01:19.479489 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479492 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479495 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479498 | orchestrator | 2026-03-17 01:01:19.479501 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-17 01:01:19.479504 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:00.238) 0:00:14.391 ********* 2026-03-17 01:01:19.479507 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479510 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479513 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 01:01:19.479516 | orchestrator | 2026-03-17 01:01:19.479519 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-17 01:01:19.479522 | orchestrator | Tuesday 17 March 2026 00:59:27 +0000 (0:00:00.268) 0:00:14.660 ********* 2026-03-17 01:01:19.479525 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479528 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479531 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479535 | orchestrator | 2026-03-17 01:01:19.479538 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-17 01:01:19.479541 | orchestrator | Tuesday 17 March 2026 00:59:27 +0000 (0:00:00.367) 0:00:15.028 ********* 2026-03-17 01:01:19.479544 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479547 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479550 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479553 | orchestrator | 2026-03-17 01:01:19.479556 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-17 01:01:19.479559 | orchestrator | Tuesday 17 March 2026 00:59:27 +0000 (0:00:00.263) 0:00:15.292 ********* 2026-03-17 01:01:19.479562 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479565 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479568 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479571 | orchestrator | 2026-03-17 01:01:19.479574 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-17 01:01:19.479577 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:00.268) 0:00:15.560 ********* 2026-03-17 01:01:19.479580 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479583 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479586 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 01:01:19.479592 | orchestrator | 2026-03-17 01:01:19.479595 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-17 01:01:19.479601 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:00.291) 0:00:15.852 ********* 2026-03-17 01:01:19.479604 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479607 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479610 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.479613 | orchestrator | 2026-03-17 01:01:19.479616 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-17 01:01:19.479619 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:00.373) 0:00:16.225 ********* 2026-03-17 01:01:19.479623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386', 'dm-uuid-LVM-6timofDkKT1hbgs1UiLHgm8I9lC3wjGeUFlOGZuZIlCxkSeT3VIDJBOooO84jJ4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2', 'dm-uuid-LVM-1aQ8jNKmNVPuSUkhlXwYUGiDvucOc05Mj89XXHB4DeQUyYPBdPYze9NHoCjcBTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479657 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479687 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5NLxCA-3OmV-UzBj-h29u-hGxB-8QDS-1x2KeN', 'scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b', 'scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77', 'dm-uuid-LVM-6cbvHb49d19dye5SGAJdR4tSnVL1sn3e3VYpoopj1ggoGa3fsfEBFRYtjRY42Zwu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cfj8Wt-Pc6p-KnzT-OxB4-bn7U-Wz17-huOjFT', 'scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451', 'scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57', 'dm-uuid-LVM-KRraYFqQ9BlELNTrII6HMgV69ppufHP8w3fspEp9JIWjJz6vi7Do1Q3YTQrhsZKv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63', 'scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:01:19.479887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479896 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.479900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZZigmv-scWw-dS3h-Kt7R-sNr1-R177-KHwZlS', 'scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15', 'scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771', 'dm-uuid-LVM-ysKBsJ06fJzf8zFG7udyrvhcFSNLKkXdy6bdG43Y2KDBxreI5nZ2cT5mbzpD8z3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GfGyPa-pmmX-8xWI-43WZ-LBYv-bkS1-Kty7h3', 'scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c', 'scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5', 'dm-uuid-LVM-iBbCl8l2Cp21TLsab0LZb9UpEOZERcnCml6xqtK2mvkLdOPZUie3k4LRW6n3GT3Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8', 'scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.479957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:01:19.479961 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.479964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:01:19.479997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.480000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bK2hhS-QERv-lWih-5lqM-caSH-wXWc-LdaWOA', 'scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee', 'scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.480004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CF7sTO-CKEf-1owX-tMHc-nUDz-6un9-8O9zaO', 'scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9', 'scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.480007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65', 'scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.480017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:01:19.480027 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480030 | orchestrator | 2026-03-17 01:01:19.480058 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-17 01:01:19.480061 | orchestrator | Tuesday 17 March 2026 00:59:29 +0000 (0:00:00.546) 0:00:16.772 ********* 2026-03-17 01:01:19.480065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386', 'dm-uuid-LVM-6timofDkKT1hbgs1UiLHgm8I9lC3wjGeUFlOGZuZIlCxkSeT3VIDJBOooO84jJ4W'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2', 'dm-uuid-LVM-1aQ8jNKmNVPuSUkhlXwYUGiDvucOc05Mj89XXHB4DeQUyYPBdPYze9NHoCjcBTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480076 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480093 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77', 'dm-uuid-LVM-6cbvHb49d19dye5SGAJdR4tSnVL1sn3e3VYpoopj1ggoGa3fsfEBFRYtjRY42Zwu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480103 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57', 'dm-uuid-LVM-KRraYFqQ9BlELNTrII6HMgV69ppufHP8w3fspEp9JIWjJz6vi7Do1Q3YTQrhsZKv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480131 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16', 'scsi-SQEMU_QEMU_HARDDISK_fef12aab-9308-4371-8ae3-fd48e064f393-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--45fdc78c--b598--5156--b36d--ba4cd7c12386-osd--block--45fdc78c--b598--5156--b36d--ba4cd7c12386'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5NLxCA-3OmV-UzBj-h29u-hGxB-8QDS-1x2KeN', 'scsi-0QEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b', 'scsi-SQEMU_QEMU_HARDDISK_f65971dd-3d8e-4ccb-8892-9cef1457b08b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2b5d6da3--626f--5c09--a421--20ac1510e3d2-osd--block--2b5d6da3--626f--5c09--a421--20ac1510e3d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cfj8Wt-Pc6p-KnzT-OxB4-bn7U-Wz17-huOjFT', 'scsi-0QEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451', 'scsi-SQEMU_QEMU_HARDDISK_8140ca94-7747-4c81-b89b-0d83b2f23451'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63', 'scsi-SQEMU_QEMU_HARDDISK_1afbae95-f964-4c90-9c71-9e7629ff9c63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480185 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 01:01:19.480189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480199 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771', 'dm-uuid-LVM-ysKBsJ06fJzf8zFG7udyrvhcFSNLKkXdy6bdG43Y2KDBxreI5nZ2cT5mbzpD8z3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5', 'dm-uuid-LVM-iBbCl8l2Cp21TLsab0LZb9UpEOZERcnCml6xqtK2mvkLdOPZUie3k4LRW6n3GT3Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e08813e-a36b-44d4-8c45-37d944b877b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480226 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dc88f193--a403--571c--9716--867079cb0a77-osd--block--dc88f193--a403--571c--9716--867079cb0a77'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZZigmv-scWw-dS3h-Kt7R-sNr1-R177-KHwZlS', 'scsi-0QEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15', 'scsi-SQEMU_QEMU_HARDDISK_83f9c1ee-a593-4773-9f19-cdbbc5179b15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e905ad0--9805--5328--aec5--92944dddbd57-osd--block--9e905ad0--9805--5328--aec5--92944dddbd57'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GfGyPa-pmmX-8xWI-43WZ-LBYv-bkS1-Kty7h3', 'scsi-0QEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c', 'scsi-SQEMU_QEMU_HARDDISK_a8e3ed1c-2f99-41d3-ad10-61535a4cd08c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480241 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8', 'scsi-SQEMU_QEMU_HARDDISK_89f9da0d-6b93-4417-9f39-e48f14dc47e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480274 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dfd38aa9-0273-4d0b-842d-83e2b920901d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c41c00e--01b2--5de9--9d7e--31888b7f9771-osd--block--3c41c00e--01b2--5de9--9d7e--31888b7f9771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bK2hhS-QERv-lWih-5lqM-caSH-wXWc-LdaWOA', 'scsi-0QEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee', 'scsi-SQEMU_QEMU_HARDDISK_304f2e06-033e-4696-8bcf-5d7e9425b0ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5-osd--block--b1b21aa2--16de--5cd3--9497--37bc0f66c5a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CF7sTO-CKEf-1owX-tMHc-nUDz-6un9-8O9zaO', 'scsi-0QEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9', 'scsi-SQEMU_QEMU_HARDDISK_d33e80f7-c5e3-468e-989c-76b1c28adee9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65', 'scsi-SQEMU_QEMU_HARDDISK_fe0d5661-edac-468e-9d1d-014c3e419a65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480303 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:01:19.480307 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480310 | orchestrator | 2026-03-17 01:01:19.480313 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-17 01:01:19.480316 | orchestrator | Tuesday 17 March 2026 00:59:29 +0000 (0:00:00.467) 0:00:17.239 ********* 2026-03-17 01:01:19.480319 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.480322 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.480325 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.480329 | orchestrator | 2026-03-17 01:01:19.480332 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-17 01:01:19.480336 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:00.622) 0:00:17.862 ********* 2026-03-17 01:01:19.480341 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.480346 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.480351 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.480356 | orchestrator | 2026-03-17 01:01:19.480361 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 01:01:19.480365 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:00.374) 0:00:18.237 ********* 2026-03-17 01:01:19.480369 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.480374 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.480378 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.480384 | orchestrator | 2026-03-17 01:01:19.480391 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-17 01:01:19.480399 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.586) 0:00:18.823 ********* 2026-03-17 01:01:19.480404 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480408 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480413 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480421 | orchestrator | 2026-03-17 01:01:19.480426 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 01:01:19.480431 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.247) 0:00:19.071 ********* 2026-03-17 01:01:19.480435 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480440 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480445 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480450 | orchestrator | 2026-03-17 01:01:19.480454 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-17 01:01:19.480459 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.349) 0:00:19.421 ********* 2026-03-17 01:01:19.480463 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480468 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480473 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480477 | orchestrator | 2026-03-17 01:01:19.480482 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-17 01:01:19.480487 | orchestrator | Tuesday 17 March 2026 00:59:32 +0000 (0:00:00.375) 0:00:19.796 ********* 2026-03-17 01:01:19.480492 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-17 01:01:19.480497 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-17 01:01:19.480502 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-17 01:01:19.480507 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-17 01:01:19.480512 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-17 01:01:19.480518 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-17 01:01:19.480522 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-17 01:01:19.480526 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-17 01:01:19.480529 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-17 01:01:19.480533 | orchestrator | 2026-03-17 01:01:19.480536 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-17 01:01:19.480540 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.764) 0:00:20.560 ********* 2026-03-17 01:01:19.480543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-17 01:01:19.480547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 01:01:19.480551 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-17 01:01:19.480554 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480558 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-17 01:01:19.480561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-17 01:01:19.480565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-17 01:01:19.480569 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480572 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-17 01:01:19.480576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-17 01:01:19.480579 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-17 01:01:19.480583 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480586 | orchestrator | 2026-03-17 01:01:19.480590 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-17 01:01:19.480593 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.280) 0:00:20.841 ********* 2026-03-17 01:01:19.480598 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:01:19.480601 | orchestrator | 2026-03-17 01:01:19.480605 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-17 01:01:19.480612 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.465) 0:00:21.306 ********* 2026-03-17 01:01:19.480619 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480623 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480627 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480644 | orchestrator | 2026-03-17 01:01:19.480653 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-17 01:01:19.480658 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.244) 0:00:21.551 ********* 2026-03-17 01:01:19.480663 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480667 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480672 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480678 | orchestrator | 2026-03-17 01:01:19.480683 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-17 01:01:19.480689 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.223) 0:00:21.774 ********* 2026-03-17 01:01:19.480694 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480699 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:01:19.480704 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:01:19.480707 | orchestrator | 2026-03-17 01:01:19.480711 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-17 01:01:19.480715 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.250) 0:00:22.024 ********* 2026-03-17 01:01:19.480718 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.480722 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.480726 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.480729 | orchestrator | 2026-03-17 01:01:19.480733 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-17 01:01:19.480736 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.453) 0:00:22.477 ********* 2026-03-17 01:01:19.480740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:01:19.480743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:01:19.480747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:01:19.480751 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480754 | 
orchestrator | 2026-03-17 01:01:19.480758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-17 01:01:19.480761 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.352) 0:00:22.829 ********* 2026-03-17 01:01:19.480765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:01:19.480768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:01:19.480772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:01:19.480775 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480780 | orchestrator | 2026-03-17 01:01:19.480795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 01:01:19.480804 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.324) 0:00:23.153 ********* 2026-03-17 01:01:19.480808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:01:19.480813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:01:19.480818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:01:19.480824 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:01:19.480829 | orchestrator | 2026-03-17 01:01:19.480834 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 01:01:19.480839 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.322) 0:00:23.476 ********* 2026-03-17 01:01:19.480844 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:01:19.480849 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:01:19.480855 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:01:19.480859 | orchestrator | 2026-03-17 01:01:19.480863 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-17 01:01:19.480868 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 
(0:00:00.280) 0:00:23.756 ********* 2026-03-17 01:01:19.480878 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 01:01:19.480883 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 01:01:19.480888 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:01:19.480892 | orchestrator | 2026-03-17 01:01:19.480897 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-17 01:01:19.480901 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:00.428) 0:00:24.185 ********* 2026-03-17 01:01:19.480906 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:01:19.480911 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:01:19.480915 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:01:19.480920 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:01:19.480925 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:01:19.480930 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:01:19.480935 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:01:19.480940 | orchestrator | 2026-03-17 01:01:19.480945 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-17 01:01:19.480950 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.803) 0:00:24.988 ********* 2026-03-17 01:01:19.480955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:01:19.480960 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:01:19.480972 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 01:01:19.480982 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:01:19.480988 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-17 01:01:19.480993 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-17 01:01:19.481002 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-17 01:01:19.481008 | orchestrator |
2026-03-17 01:01:19.481016 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-17 01:01:19.481022 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:01.909) 0:00:26.897 *********
2026-03-17 01:01:19.481027 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:01:19.481033 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:01:19.481038 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-17 01:01:19.481044 | orchestrator |
2026-03-17 01:01:19.481049 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-17 01:01:19.481055 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.392) 0:00:27.290 *********
2026-03-17 01:01:19.481061 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:01:19.481068 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:01:19.481073 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:01:19.481083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:01:19.481089 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:01:19.481094 | orchestrator |
2026-03-17 01:01:19.481099 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-17 01:01:19.481105 | orchestrator | Tuesday 17 March 2026 01:00:24 +0000 (0:00:44.671) 0:01:11.961 *********
2026-03-17 01:01:19.481110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481127 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481143 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-17 01:01:19.481148 | orchestrator |
2026-03-17 01:01:19.481154 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-17 01:01:19.481159 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:23.889) 0:01:35.851 *********
2026-03-17 01:01:19.481164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481169 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481175 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481181 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481186 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481197 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-17 01:01:19.481202 | orchestrator |
2026-03-17 01:01:19.481207 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-17 01:01:19.481212 | orchestrator | Tuesday 17 March 2026 01:01:00 +0000 (0:00:12.506) 0:01:48.358 *********
2026-03-17 01:01:19.481218 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481223 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481228 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481234 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481239 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481247 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481255 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481260 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481266 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481277 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481284 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481289 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481293 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481297 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:01:19.481306 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:01:19.481311 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:01:19.481316 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-17 01:01:19.481320 | orchestrator |
2026-03-17 01:01:19.481325 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:01:19.481330 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-17 01:01:19.481335 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-17 01:01:19.481340 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-17 01:01:19.481345 | orchestrator |
2026-03-17 01:01:19.481351 | orchestrator |
2026-03-17 01:01:19.481357 | orchestrator |
2026-03-17 01:01:19.481361 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:01:19.481366 | orchestrator | Tuesday 17 March 2026 01:01:17 +0000 (0:00:16.873) 0:02:05.232 *********
2026-03-17 01:01:19.481371 | orchestrator | ===============================================================================
2026-03-17 01:01:19.481376 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.67s
2026-03-17 01:01:19.481380 | orchestrator | generate keys ---------------------------------------------------------- 23.89s
2026-03-17 01:01:19.481385 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.87s
2026-03-17 01:01:19.481390 | orchestrator | get keys from monitors ------------------------------------------------- 12.51s
2026-03-17 01:01:19.481394 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.84s
2026-03-17 01:01:19.481399 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.04s
2026-03-17 01:01:19.481404 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.91s
2026-03-17 01:01:19.481409 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.90s
2026-03-17 01:01:19.481415 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s
2026-03-17 01:01:19.481419 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.80s
2026-03-17
01:01:19.481422 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2026-03-17 01:01:19.481425 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.76s 2026-03-17 01:01:19.481428 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-03-17 01:01:19.481431 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2026-03-17 01:01:19.481434 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2026-03-17 01:01:19.481437 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.59s 2026-03-17 01:01:19.481440 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2026-03-17 01:01:19.481443 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.47s 2026-03-17 01:01:19.481447 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.47s 2026-03-17 01:01:19.481454 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.47s 2026-03-17 01:01:19.481459 | orchestrator | 2026-03-17 01:01:19 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:19.482186 | orchestrator | 2026-03-17 01:01:19 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:19.482220 | orchestrator | 2026-03-17 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:22.529960 | orchestrator | 2026-03-17 01:01:22 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:22.531880 | orchestrator | 2026-03-17 01:01:22 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:22.533549 | orchestrator | 2026-03-17 01:01:22 | INFO  | Task 
3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:22.533589 | orchestrator | 2026-03-17 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:25.574609 | orchestrator | 2026-03-17 01:01:25 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:25.577666 | orchestrator | 2026-03-17 01:01:25 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:25.579852 | orchestrator | 2026-03-17 01:01:25 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:25.580566 | orchestrator | 2026-03-17 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:28.618749 | orchestrator | 2026-03-17 01:01:28 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:28.619689 | orchestrator | 2026-03-17 01:01:28 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state STARTED 2026-03-17 01:01:28.623076 | orchestrator | 2026-03-17 01:01:28 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:28.623135 | orchestrator | 2026-03-17 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:31.671075 | orchestrator | 2026-03-17 01:01:31 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:31.674352 | orchestrator | 2026-03-17 01:01:31 | INFO  | Task 4453593d-75f5-459c-84af-4041168d0d65 is in state SUCCESS 2026-03-17 01:01:31.675338 | orchestrator | 2026-03-17 01:01:31.675374 | orchestrator | 2026-03-17 01:01:31.675380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:01:31.675385 | orchestrator | 2026-03-17 01:01:31.675389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:01:31.675394 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.302) 0:00:00.302 ********* 2026-03-17 
01:01:31.675398 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.675403 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.675407 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.675411 | orchestrator | 2026-03-17 01:01:31.675415 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:01:31.675419 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.276) 0:00:00.578 ********* 2026-03-17 01:01:31.675423 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-17 01:01:31.675429 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-17 01:01:31.675435 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-17 01:01:31.675441 | orchestrator | 2026-03-17 01:01:31.675448 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-17 01:01:31.675457 | orchestrator | 2026-03-17 01:01:31.675464 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:31.675470 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.293) 0:00:00.872 ********* 2026-03-17 01:01:31.675497 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:31.675505 | orchestrator | 2026-03-17 01:01:31.675510 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-17 01:01:31.675516 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.596) 0:00:01.469 ********* 2026-03-17 01:01:31.675543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.675566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-03-17 01:01:31.675583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.675591 | orchestrator | 2026-03-17 01:01:31.675597 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-17 01:01:31.675603 | orchestrator | Tuesday 17 March 2026 00:59:59 +0000 (0:00:01.533) 0:00:03.002 ********* 2026-03-17 01:01:31.675609 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.675615 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.675621 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.675627 | orchestrator | 2026-03-17 01:01:31.675632 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:31.675638 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.270) 0:00:03.272 ********* 2026-03-17 01:01:31.675645 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:31.675655 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:01:31.675662 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:31.675668 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:31.675674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:31.675685 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:31.675839 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:31.675844 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:01:31.675848 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:31.675852 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:31.675856 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:31.675860 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:31.675863 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:31.675930 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:31.675935 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:31.675939 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:31.675943 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:31.675947 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:01:31.675951 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:31.675955 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:31.675958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:31.675962 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:31.676536 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:31.676551 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:31.676560 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-17 01:01:31.676569 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-17 01:01:31.676576 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-17 01:01:31.676581 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-17 01:01:31.676586 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-17 01:01:31.676598 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-17 01:01:31.676603 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-17 01:01:31.676606 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-17 01:01:31.676610 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-17 01:01:31.676615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-17 01:01:31.676628 | orchestrator | 2026-03-17 01:01:31.676632 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-03-17 01:01:31.676636 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.690) 0:00:03.962 ********* 2026-03-17 01:01:31.676640 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.676644 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.676648 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.676652 | orchestrator | 2026-03-17 01:01:31.676657 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.676663 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.478) 0:00:04.441 ********* 2026-03-17 01:01:31.676669 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676678 | orchestrator | 2026-03-17 01:01:31.676696 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.676702 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.129) 0:00:04.570 ********* 2026-03-17 01:01:31.676707 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676714 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.676720 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.676725 | orchestrator | 2026-03-17 01:01:31.676731 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.676737 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.262) 0:00:04.833 ********* 2026-03-17 01:01:31.676743 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.676748 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.676754 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.676760 | orchestrator | 2026-03-17 01:01:31.676765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.676770 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:00.288) 0:00:05.122 ********* 2026-03-17 
01:01:31.676798 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676805 | orchestrator | 2026-03-17 01:01:31.676811 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.676817 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:00.128) 0:00:05.250 ********* 2026-03-17 01:01:31.676823 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676829 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.676835 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.676841 | orchestrator | 2026-03-17 01:01:31.676847 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.676853 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:00.451) 0:00:05.702 ********* 2026-03-17 01:01:31.676859 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.676865 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.676871 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.676877 | orchestrator | 2026-03-17 01:01:31.676883 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.676889 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:00.295) 0:00:05.997 ********* 2026-03-17 01:01:31.676895 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676901 | orchestrator | 2026-03-17 01:01:31.676906 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.676911 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:00.109) 0:00:06.107 ********* 2026-03-17 01:01:31.676917 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.676987 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.676994 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.676999 | orchestrator | 2026-03-17 01:01:31.677005 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-03-17 01:01:31.677010 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:00.280) 0:00:06.387 ********* 2026-03-17 01:01:31.677015 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677021 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677026 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677041 | orchestrator | 2026-03-17 01:01:31.677046 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677054 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:00.291) 0:00:06.679 ********* 2026-03-17 01:01:31.677061 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677069 | orchestrator | 2026-03-17 01:01:31.677075 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677081 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:00.111) 0:00:06.790 ********* 2026-03-17 01:01:31.677088 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677094 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677099 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677105 | orchestrator | 2026-03-17 01:01:31.677111 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677116 | orchestrator | Tuesday 17 March 2026 01:00:04 +0000 (0:00:00.439) 0:00:07.230 ********* 2026-03-17 01:01:31.677124 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677129 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677136 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677142 | orchestrator | 2026-03-17 01:01:31.677147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677160 | orchestrator | Tuesday 17 March 2026 01:00:04 +0000 (0:00:00.279) 0:00:07.509 ********* 
2026-03-17 01:01:31.677167 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677173 | orchestrator | 2026-03-17 01:01:31.677179 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677186 | orchestrator | Tuesday 17 March 2026 01:00:04 +0000 (0:00:00.129) 0:00:07.638 ********* 2026-03-17 01:01:31.677192 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677199 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677204 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677209 | orchestrator | 2026-03-17 01:01:31.677213 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677217 | orchestrator | Tuesday 17 March 2026 01:00:04 +0000 (0:00:00.271) 0:00:07.910 ********* 2026-03-17 01:01:31.677269 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677325 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677329 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677333 | orchestrator | 2026-03-17 01:01:31.677337 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677341 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:00.283) 0:00:08.193 ********* 2026-03-17 01:01:31.677345 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677348 | orchestrator | 2026-03-17 01:01:31.677352 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677356 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:00.307) 0:00:08.501 ********* 2026-03-17 01:01:31.677360 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677364 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677368 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677371 | orchestrator | 2026-03-17 01:01:31.677375 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677390 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:00.269) 0:00:08.771 ********* 2026-03-17 01:01:31.677397 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677406 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677412 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677417 | orchestrator | 2026-03-17 01:01:31.677423 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677428 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:00.260) 0:00:09.032 ********* 2026-03-17 01:01:31.677434 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677440 | orchestrator | 2026-03-17 01:01:31.677445 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677451 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:00.113) 0:00:09.145 ********* 2026-03-17 01:01:31.677465 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677471 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677476 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677482 | orchestrator | 2026-03-17 01:01:31.677488 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677495 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:00.242) 0:00:09.388 ********* 2026-03-17 01:01:31.677528 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677533 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677537 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677541 | orchestrator | 2026-03-17 01:01:31.677545 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677549 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:00.417) 
0:00:09.805 ********* 2026-03-17 01:01:31.677553 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677557 | orchestrator | 2026-03-17 01:01:31.677561 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677594 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:00.127) 0:00:09.932 ********* 2026-03-17 01:01:31.677598 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677602 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677606 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677610 | orchestrator | 2026-03-17 01:01:31.677615 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677619 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.247) 0:00:10.180 ********* 2026-03-17 01:01:31.677623 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677627 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677631 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677635 | orchestrator | 2026-03-17 01:01:31.677640 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677644 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.283) 0:00:10.463 ********* 2026-03-17 01:01:31.677648 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677652 | orchestrator | 2026-03-17 01:01:31.677656 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677660 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.095) 0:00:10.558 ********* 2026-03-17 01:01:31.677665 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677669 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677673 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677677 | orchestrator | 2026-03-17 01:01:31.677681 
| orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:31.677685 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.259) 0:00:10.818 ********* 2026-03-17 01:01:31.677689 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:31.677693 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:31.677697 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:31.677702 | orchestrator | 2026-03-17 01:01:31.677706 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:31.677710 | orchestrator | Tuesday 17 March 2026 01:00:08 +0000 (0:00:00.361) 0:00:11.179 ********* 2026-03-17 01:01:31.677714 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677718 | orchestrator | 2026-03-17 01:01:31.677722 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:31.677726 | orchestrator | Tuesday 17 March 2026 01:00:08 +0000 (0:00:00.119) 0:00:11.299 ********* 2026-03-17 01:01:31.677730 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.677734 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.677739 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.677743 | orchestrator | 2026-03-17 01:01:31.677747 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-17 01:01:31.677752 | orchestrator | Tuesday 17 March 2026 01:00:08 +0000 (0:00:00.239) 0:00:11.538 ********* 2026-03-17 01:01:31.677761 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:01:31.677765 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:31.677769 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:01:31.677773 | orchestrator | 2026-03-17 01:01:31.677881 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-17 01:01:31.677886 | orchestrator | Tuesday 17 March 2026 
01:00:09 +0000 (0:00:01.425) 0:00:12.964 ********* 2026-03-17 01:01:31.677890 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:31.677895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:31.677899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:31.677903 | orchestrator | 2026-03-17 01:01:31.677906 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-17 01:01:31.677911 | orchestrator | Tuesday 17 March 2026 01:00:11 +0000 (0:00:01.990) 0:00:14.955 ********* 2026-03-17 01:01:31.677914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:31.677938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:31.677942 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:31.677946 | orchestrator | 2026-03-17 01:01:31.677950 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-17 01:01:31.677961 | orchestrator | Tuesday 17 March 2026 01:00:13 +0000 (0:00:01.955) 0:00:16.911 ********* 2026-03-17 01:01:31.677965 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:31.677969 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:31.677973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:31.677977 | orchestrator | 2026-03-17 01:01:31.677981 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2026-03-17 01:01:31.677985 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:01.481) 0:00:18.392 ********* 2026-03-17 01:01:31.678065 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.678072 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.678076 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.678080 | orchestrator | 2026-03-17 01:01:31.678084 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-17 01:01:31.678088 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:00.260) 0:00:18.652 ********* 2026-03-17 01:01:31.678092 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.678096 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.678100 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.678104 | orchestrator | 2026-03-17 01:01:31.678108 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:31.678112 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:00.266) 0:00:18.919 ********* 2026-03-17 01:01:31.678116 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:31.678119 | orchestrator | 2026-03-17 01:01:31.678123 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-17 01:01:31.678127 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:00.750) 0:00:19.670 ********* 2026-03-17 01:01:31.678137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.678171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-03-17 01:01:31.678180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.678188 | orchestrator | 2026-03-17 01:01:31.678193 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-17 01:01:31.678196 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:01.633) 0:00:21.303 ********* 2026-03-17 01:01:31.678205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678213 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.678222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678227 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.678231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678239 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.678243 | orchestrator | 2026-03-17 01:01:31.678247 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-17 01:01:31.678251 | orchestrator | Tuesday 17 March 2026 01:00:19 +0000 (0:00:00.865) 0:00:22.170 ********* 2026-03-17 01:01:31.678262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678292 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.678297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678305 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.678317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:31.678321 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.678325 | orchestrator | 2026-03-17 01:01:31.678329 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-17 01:01:31.678332 | orchestrator | Tuesday 17 March 2026 01:00:20 +0000 (0:00:01.200) 0:00:23.370 ********* 2026-03-17 01:01:31.678344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 
'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.678352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.678363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:31.678368 | orchestrator | 2026-03-17 01:01:31.678372 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:31.678375 | orchestrator | Tuesday 17 March 2026 01:00:21 +0000 (0:00:01.335) 0:00:24.705 ********* 2026-03-17 01:01:31.678379 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:31.678383 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:31.678387 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:31.678391 | orchestrator | 2026-03-17 01:01:31.678395 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:31.678398 | orchestrator | Tuesday 17 March 2026 01:00:21 
+0000 (0:00:00.274) 0:00:24.980 ********* 2026-03-17 01:01:31.678402 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:31.678406 | orchestrator | 2026-03-17 01:01:31.678530 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-17 01:01:31.678539 | orchestrator | Tuesday 17 March 2026 01:00:22 +0000 (0:00:00.662) 0:00:25.643 ********* 2026-03-17 01:01:31.678543 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:31.678546 | orchestrator | 2026-03-17 01:01:31.678550 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-17 01:01:31.678590 | orchestrator | Tuesday 17 March 2026 01:00:24 +0000 (0:00:02.354) 0:00:27.997 ********* 2026-03-17 01:01:31.678595 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:31.678599 | orchestrator | 2026-03-17 01:01:31.678602 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-17 01:01:31.678606 | orchestrator | Tuesday 17 March 2026 01:00:27 +0000 (0:00:02.149) 0:00:30.146 ********* 2026-03-17 01:01:31.678614 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:31.678617 | orchestrator | 2026-03-17 01:01:31.678621 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-17 01:01:31.678625 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:16.227) 0:00:46.373 ********* 2026-03-17 01:01:31.678629 | orchestrator | 2026-03-17 01:01:31.678633 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-17 01:01:31.678636 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.063) 0:00:46.436 ********* 2026-03-17 01:01:31.678640 | orchestrator | 2026-03-17 01:01:31.678644 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 
2026-03-17 01:01:31.678648 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.059) 0:00:46.496 ********* 2026-03-17 01:01:31.678651 | orchestrator | 2026-03-17 01:01:31.678655 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-17 01:01:31.678659 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.065) 0:00:46.561 ********* 2026-03-17 01:01:31.678663 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:31.678667 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:01:31.678670 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:01:31.678674 | orchestrator | 2026-03-17 01:01:31.678678 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:01:31.678682 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 01:01:31.678687 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-17 01:01:31.678691 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-17 01:01:31.678695 | orchestrator | 2026-03-17 01:01:31.678698 | orchestrator | 2026-03-17 01:01:31.678702 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:01:31.678706 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:45.826) 0:01:32.388 ********* 2026-03-17 01:01:31.678710 | orchestrator | =============================================================================== 2026-03-17 01:01:31.678713 | orchestrator | horizon : Restart horizon container ------------------------------------ 45.83s 2026-03-17 01:01:31.678717 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.23s 2026-03-17 01:01:31.678722 | orchestrator | horizon : Creating Horizon database 
------------------------------------- 2.35s 2026-03-17 01:01:31.678728 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.15s 2026-03-17 01:01:31.678734 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.99s 2026-03-17 01:01:31.678742 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.96s 2026-03-17 01:01:31.678750 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2026-03-17 01:01:31.678759 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.53s 2026-03-17 01:01:31.678765 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.48s 2026-03-17 01:01:31.678856 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.43s 2026-03-17 01:01:31.678867 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2026-03-17 01:01:31.678872 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.20s 2026-03-17 01:01:31.678878 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.87s 2026-03-17 01:01:31.678884 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-03-17 01:01:31.678926 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-03-17 01:01:31.678932 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2026-03-17 01:01:31.678948 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-03-17 01:01:31.678954 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-03-17 01:01:31.678961 | orchestrator | horizon : Update custom policy file name 
-------------------------------- 0.45s 2026-03-17 01:01:31.678967 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.44s 2026-03-17 01:01:31.678974 | orchestrator | 2026-03-17 01:01:31 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:31.678980 | orchestrator | 2026-03-17 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:34.718590 | orchestrator | 2026-03-17 01:01:34 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:34.719649 | orchestrator | 2026-03-17 01:01:34 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:34.720219 | orchestrator | 2026-03-17 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:37.775991 | orchestrator | 2026-03-17 01:01:37 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:37.777440 | orchestrator | 2026-03-17 01:01:37 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:37.777910 | orchestrator | 2026-03-17 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:40.831946 | orchestrator | 2026-03-17 01:01:40 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:40.833376 | orchestrator | 2026-03-17 01:01:40 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:40.833535 | orchestrator | 2026-03-17 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:43.886995 | orchestrator | 2026-03-17 01:01:43 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:43.889029 | orchestrator | 2026-03-17 01:01:43 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:43.889381 | orchestrator | 2026-03-17 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:46.938625 | orchestrator | 
2026-03-17 01:01:46 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:46.940051 | orchestrator | 2026-03-17 01:01:46 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:46.940102 | orchestrator | 2026-03-17 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:49.976726 | orchestrator | 2026-03-17 01:01:49 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:49.980193 | orchestrator | 2026-03-17 01:01:49 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:49.980257 | orchestrator | 2026-03-17 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:53.029254 | orchestrator | 2026-03-17 01:01:53 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:53.031099 | orchestrator | 2026-03-17 01:01:53 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state STARTED 2026-03-17 01:01:53.031171 | orchestrator | 2026-03-17 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:56.080652 | orchestrator | 2026-03-17 01:01:56 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:56.081892 | orchestrator | 2026-03-17 01:01:56 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:01:56.083778 | orchestrator | 2026-03-17 01:01:56 | INFO  | Task 3b407352-aa39-4aac-b51f-0f3c54b54480 is in state SUCCESS 2026-03-17 01:01:56.084064 | orchestrator | 2026-03-17 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:59.149229 | orchestrator | 2026-03-17 01:01:59 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:01:59.151397 | orchestrator | 2026-03-17 01:01:59 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:01:59.151429 | orchestrator | 2026-03-17 01:01:59 | INFO  | Wait 1 second(s) until 
the next check 2026-03-17 01:02:02.210121 | orchestrator | 2026-03-17 01:02:02 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:02.211333 | orchestrator | 2026-03-17 01:02:02 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:02.211395 | orchestrator | 2026-03-17 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:05.261306 | orchestrator | 2026-03-17 01:02:05 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:05.265949 | orchestrator | 2026-03-17 01:02:05 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:05.266005 | orchestrator | 2026-03-17 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:08.306049 | orchestrator | 2026-03-17 01:02:08 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:08.310458 | orchestrator | 2026-03-17 01:02:08 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:08.310529 | orchestrator | 2026-03-17 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:11.346656 | orchestrator | 2026-03-17 01:02:11 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:11.348104 | orchestrator | 2026-03-17 01:02:11 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:11.348162 | orchestrator | 2026-03-17 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:14.392499 | orchestrator | 2026-03-17 01:02:14 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:14.394118 | orchestrator | 2026-03-17 01:02:14 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:14.394170 | orchestrator | 2026-03-17 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:17.441713 | orchestrator | 2026-03-17 01:02:17 
| INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:17.445974 | orchestrator | 2026-03-17 01:02:17 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:17.448019 | orchestrator | 2026-03-17 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:20.486177 | orchestrator | 2026-03-17 01:02:20 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:20.488228 | orchestrator | 2026-03-17 01:02:20 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:20.488282 | orchestrator | 2026-03-17 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:23.527656 | orchestrator | 2026-03-17 01:02:23 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:23.527796 | orchestrator | 2026-03-17 01:02:23 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:23.527808 | orchestrator | 2026-03-17 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:26.568936 | orchestrator | 2026-03-17 01:02:26 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:26.570299 | orchestrator | 2026-03-17 01:02:26 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:26.570365 | orchestrator | 2026-03-17 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:29.610582 | orchestrator | 2026-03-17 01:02:29 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state STARTED 2026-03-17 01:02:29.613111 | orchestrator | 2026-03-17 01:02:29 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED 2026-03-17 01:02:29.613173 | orchestrator | 2026-03-17 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:32.651372 | orchestrator | 2026-03-17 01:02:32.651425 | orchestrator | 2026-03-17 01:02:32.651433 | orchestrator | PLAY [Copy 
ceph keys to the configuration repository] ************************** 2026-03-17 01:02:32.651441 | orchestrator | 2026-03-17 01:02:32.651448 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-17 01:02:32.651455 | orchestrator | Tuesday 17 March 2026 01:01:21 +0000 (0:00:00.233) 0:00:00.233 ********* 2026-03-17 01:02:32.651462 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:32.651507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651515 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651530 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:32.651576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651585 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:32.651589 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:32.651593 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:32.651632 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:32.651641 | orchestrator | 2026-03-17 01:02:32.651648 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-17 01:02:32.651655 | orchestrator | Tuesday 17 March 2026 01:01:25 +0000 (0:00:04.149) 0:00:04.382 ********* 2026-03-17 01:02:32.651662 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.admin.keyring) 2026-03-17 01:02:32.651668 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651675 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651829 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:32.651842 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:32.651855 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:32.651861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:32.651868 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:32.651875 | orchestrator | 2026-03-17 01:02:32.651881 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-17 01:02:32.651887 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:04.017) 0:00:08.400 ********* 2026-03-17 01:02:32.651909 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 01:02:32.651927 | orchestrator | 2026-03-17 01:02:32.651935 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-17 01:02:32.651941 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:01.101) 0:00:09.502 ********* 2026-03-17 01:02:32.651948 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:32.651955 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651961 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651965 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:32.651969 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.651973 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:32.651977 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:32.651981 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:32.651984 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:32.651988 | orchestrator | 2026-03-17 01:02:32.651992 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-17 01:02:32.651995 | orchestrator | Tuesday 17 March 2026 01:01:44 +0000 (0:00:13.824) 0:00:23.326 ********* 2026-03-17 01:02:32.651999 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-17 01:02:32.652003 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-17 01:02:32.652007 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:02:32.652011 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:02:32.652022 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:02:32.652026 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:02:32.652029 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-17 01:02:32.652033 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-17 01:02:32.652037 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-17 01:02:32.652040 | orchestrator | 2026-03-17 01:02:32.652044 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-17 01:02:32.652049 | orchestrator | Tuesday 17 March 2026 01:01:47 +0000 (0:00:03.322) 0:00:26.649 ********* 2026-03-17 01:02:32.652056 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:32.652060 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.652064 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.652068 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:32.652071 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:32.652075 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:32.652079 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:32.652083 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:32.652086 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:32.652090 | orchestrator | 2026-03-17 01:02:32.652100 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:02:32.652104 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:02:32.652108 | orchestrator | 2026-03-17 01:02:32.652112 | orchestrator | 2026-03-17 01:02:32.652115 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:02:32.652119 | orchestrator | Tuesday 17 March 2026 01:01:54 +0000 (0:00:06.829) 0:00:33.478 ********* 2026-03-17 01:02:32.652123 | orchestrator | =============================================================================== 2026-03-17 01:02:32.652126 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.82s 2026-03-17 01:02:32.652130 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.83s 2026-03-17 01:02:32.652134 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.15s 2026-03-17 01:02:32.652138 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.02s 2026-03-17 01:02:32.652141 | orchestrator | Check if target directories exist --------------------------------------- 3.32s 2026-03-17 01:02:32.652145 | orchestrator | Create share directory -------------------------------------------------- 1.10s 2026-03-17 01:02:32.652149 | orchestrator | 2026-03-17 01:02:32.652153 | orchestrator | 2026-03-17 01:02:32.652156 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:02:32.652160 | orchestrator | 2026-03-17 01:02:32.652164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:02:32.652167 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.314) 0:00:00.314 ********* 2026-03-17 01:02:32.652171 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:32.652175 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:32.652179 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:32.652183 | orchestrator | 2026-03-17 
01:02:32.652186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:02:32.652190 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.286) 0:00:00.601 ********* 2026-03-17 01:02:32.652194 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-17 01:02:32.652198 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-17 01:02:32.652255 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-17 01:02:32.652260 | orchestrator | 2026-03-17 01:02:32.652263 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-17 01:02:32.652267 | orchestrator | 2026-03-17 01:02:32.652271 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:32.652275 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.289) 0:00:00.891 ********* 2026-03-17 01:02:32.652279 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:32.652283 | orchestrator | 2026-03-17 01:02:32.652286 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-17 01:02:32.652290 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.710) 0:00:01.601 ********* 2026-03-17 01:02:32.652304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652476 | orchestrator | 2026-03-17 01:02:32.652482 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-17 01:02:32.652488 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:02.150) 0:00:03.752 ********* 2026-03-17 01:02:32.652494 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.652501 | orchestrator | 2026-03-17 01:02:32.652506 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-17 01:02:32.652512 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.117) 0:00:03.869 ********* 2026-03-17 01:02:32.652517 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.652522 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.652528 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.652535 | orchestrator | 2026-03-17 01:02:32.652541 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-17 01:02:32.652547 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.265) 0:00:04.135 ********* 2026-03-17 01:02:32.652553 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:32.652559 | orchestrator | 2026-03-17 01:02:32.652566 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:32.652572 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.864) 0:00:05.000 ********* 2026-03-17 01:02:32.652578 | orchestrator | 
included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:32.652583 | orchestrator | 2026-03-17 01:02:32.652587 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-17 01:02:32.652591 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:00.657) 0:00:05.657 ********* 2026-03-17 01:02:32.652613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652665 | orchestrator | 2026-03-17 01:02:32.652669 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-17 01:02:32.652673 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:03.159) 0:00:08.817 ********* 2026-03-17 01:02:32.652677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.652681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652691 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.652705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.652710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652733 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.652738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.652746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652757 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.652761 | orchestrator | 2026-03-17 01:02:32.652765 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-17 01:02:32.652769 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:00.476) 0:00:09.294 ********* 2026-03-17 01:02:32.652775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.652779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652790 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.652794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32 | INFO  | Task f59d8695-f998-4f4e-85cb-e6817cd7cbc1 is in state SUCCESS 2026-03-17 01:02:32.652802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652817 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.652821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.652825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-17 01:02:32.652834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.652841 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.652848 | orchestrator | 2026-03-17 01:02:32.652854 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-17 01:02:32.652861 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.776) 0:00:10.071 ********* 2026-03-17 01:02:32.652872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652939 | orchestrator | 2026-03-17 01:02:32.652943 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-17 01:02:32.652947 | orchestrator | Tuesday 17 March 2026 01:00:10 +0000 (0:00:03.148) 0:00:13.220 ********* 2026-03-17 01:02:32.652951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.652979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.652983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.652996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653000 | orchestrator | 2026-03-17 01:02:32.653004 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-17 01:02:32.653007 | orchestrator | Tuesday 17 March 2026 01:00:14 +0000 (0:00:04.711) 0:00:17.931 ********* 2026-03-17 01:02:32.653011 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:32.653015 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:32.653019 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:32.653023 | orchestrator | 2026-03-17 01:02:32.653027 | orchestrator | TASK [keystone : Create 
Keystone domain-specific config directory] ************* 2026-03-17 01:02:32.653030 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:01.389) 0:00:19.321 ********* 2026-03-17 01:02:32.653034 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653039 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653045 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653050 | orchestrator | 2026-03-17 01:02:32.653054 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-17 01:02:32.653059 | orchestrator | Tuesday 17 March 2026 01:00:17 +0000 (0:00:01.025) 0:00:20.347 ********* 2026-03-17 01:02:32.653063 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653067 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653071 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653076 | orchestrator | 2026-03-17 01:02:32.653080 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-17 01:02:32.653084 | orchestrator | Tuesday 17 March 2026 01:00:17 +0000 (0:00:00.338) 0:00:20.686 ********* 2026-03-17 01:02:32.653089 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653093 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653097 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653101 | orchestrator | 2026-03-17 01:02:32.653106 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-17 01:02:32.653110 | orchestrator | Tuesday 17 March 2026 01:00:17 +0000 (0:00:00.304) 0:00:20.991 ********* 2026-03-17 01:02:32.653115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.653120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.653127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.653131 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.653146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.653151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.653155 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:32.653167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:32.653174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:32.653181 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653185 | orchestrator | 2026-03-17 01:02:32.653190 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:32.653195 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:00.702) 0:00:21.693 ********* 2026-03-17 01:02:32.653199 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653203 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653208 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653212 | orchestrator | 2026-03-17 01:02:32.653216 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-17 01:02:32.653220 | orchestrator | Tuesday 17 March 2026 01:00:19 +0000 (0:00:00.452) 0:00:22.146 ********* 2026-03-17 01:02:32.653225 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:32.653229 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:32.653234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:32.653238 | orchestrator | 2026-03-17 01:02:32.653243 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-17 01:02:32.653247 | orchestrator | Tuesday 17 March 2026 01:00:21 +0000 (0:00:02.061) 0:00:24.208 ********* 2026-03-17 01:02:32.653251 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:32.653256 | orchestrator | 2026-03-17 01:02:32.653260 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-17 01:02:32.653264 | orchestrator | Tuesday 17 March 2026 01:00:22 +0000 (0:00:00.957) 0:00:25.166 ********* 2026-03-17 01:02:32.653269 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653273 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653278 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653282 | orchestrator | 2026-03-17 01:02:32.653287 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-17 01:02:32.653291 | orchestrator | Tuesday 17 March 2026 01:00:22 +0000 (0:00:00.521) 0:00:25.687 ********* 2026-03-17 01:02:32.653296 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:32.653300 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 01:02:32.653304 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 01:02:32.653309 | orchestrator | 2026-03-17 01:02:32.653312 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-17 01:02:32.653316 | orchestrator | Tuesday 17 March 2026 01:00:23 +0000 (0:00:01.057) 0:00:26.745 ********* 2026-03-17 01:02:32.653320 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:32.653324 | orchestrator | ok: 
[testbed-node-1] 2026-03-17 01:02:32.653327 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:32.653331 | orchestrator | 2026-03-17 01:02:32.653335 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-17 01:02:32.653338 | orchestrator | Tuesday 17 March 2026 01:00:24 +0000 (0:00:00.440) 0:00:27.185 ********* 2026-03-17 01:02:32.653342 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:02:32.653346 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:02:32.653350 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:02:32.653353 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:32.653357 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:32.653361 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:32.653365 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:32.653371 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:32.653375 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:32.653378 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:02:32.653382 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:02:32.653386 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 
01:02:32.653390 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:32.653399 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:32.653405 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:32.653411 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:32.653418 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:32.653425 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:32.653432 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:32.653438 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:32.653448 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:32.653452 | orchestrator | 2026-03-17 01:02:32.653456 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-17 01:02:32.653459 | orchestrator | Tuesday 17 March 2026 01:00:31 +0000 (0:00:07.839) 0:00:35.024 ********* 2026-03-17 01:02:32.653463 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:32.653467 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:32.653470 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:32.653474 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:32.653478 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:32.653482 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:32.653485 | orchestrator | 2026-03-17 01:02:32.653489 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-17 01:02:32.653493 | orchestrator | Tuesday 17 March 2026 01:00:34 +0000 (0:00:02.369) 0:00:37.394 ********* 2026-03-17 01:02:32.653497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.653504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.653511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:32.653517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:32.653546 | orchestrator | 2026-03-17 01:02:32.653550 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:32.653554 | orchestrator | Tuesday 17 March 2026 01:00:36 +0000 (0:00:02.530) 0:00:39.924 ********* 2026-03-17 01:02:32.653558 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:32.653561 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:32.653565 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:32.653569 | orchestrator | 2026-03-17 01:02:32.653572 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-17 01:02:32.653576 | orchestrator | Tuesday 17 March 2026 01:00:37 +0000 (0:00:00.258) 0:00:40.183 ********* 2026-03-17 01:02:32.653580 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:32.653584 | orchestrator | 2026-03-17 01:02:32.653587 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-17 01:02:32.653593 | orchestrator | Tuesday 17 March 2026 01:00:39 +0000 (0:00:02.645) 0:00:42.828 ********* 2026-03-17 01:02:32.653597 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:32.653601 | orchestrator | 2026-03-17 01:02:32.653605 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-17 01:02:32.653608 | orchestrator | Tuesday 17 March 2026 01:00:42 +0000 (0:00:02.345) 0:00:45.173 ********* 2026-03-17 01:02:32.653612 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:32.653616 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:32.653619 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:32.653623 | orchestrator | 2026-03-17 01:02:32.653627 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-17 01:02:32.653631 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.888) 0:00:46.062 ********* 2026-03-17 01:02:32.653634 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:32.653638 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:32.653642 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:32.653646 | orchestrator | 2026-03-17 01:02:32.653649 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-17 01:02:32.653653 | 
orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.308) 0:00:46.371 *********
2026-03-17 01:02:32.653657 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.653663 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:32.653667 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:32.653671 | orchestrator |
2026-03-17 01:02:32.653674 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-17 01:02:32.653678 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.316) 0:00:46.687 *********
2026-03-17 01:02:32.653682 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653686 | orchestrator |
2026-03-17 01:02:32.653689 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-17 01:02:32.653693 | orchestrator | Tuesday 17 March 2026 01:00:59 +0000 (0:00:15.555) 0:01:02.243 *********
2026-03-17 01:02:32.653697 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653701 | orchestrator |
2026-03-17 01:02:32.653704 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-17 01:02:32.653708 | orchestrator | Tuesday 17 March 2026 01:01:10 +0000 (0:00:11.234) 0:01:13.477 *********
2026-03-17 01:02:32.653712 | orchestrator |
2026-03-17 01:02:32.653715 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-17 01:02:32.653737 | orchestrator | Tuesday 17 March 2026 01:01:10 +0000 (0:00:00.092) 0:01:13.570 *********
2026-03-17 01:02:32.653742 | orchestrator |
2026-03-17 01:02:32.653746 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-17 01:02:32.653749 | orchestrator | Tuesday 17 March 2026 01:01:10 +0000 (0:00:00.061) 0:01:13.631 *********
2026-03-17 01:02:32.653753 | orchestrator |
2026-03-17 01:02:32.653757 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-17 01:02:32.653761 | orchestrator | Tuesday 17 March 2026 01:01:10 +0000 (0:00:00.061) 0:01:13.693 *********
2026-03-17 01:02:32.653764 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653768 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:32.653772 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:32.653775 | orchestrator |
2026-03-17 01:02:32.653779 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-17 01:02:32.653783 | orchestrator | Tuesday 17 March 2026 01:01:19 +0000 (0:00:08.635) 0:01:22.329 *********
2026-03-17 01:02:32.653787 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653790 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:32.653794 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:32.653798 | orchestrator |
2026-03-17 01:02:32.653802 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-17 01:02:32.653805 | orchestrator | Tuesday 17 March 2026 01:01:28 +0000 (0:00:09.679) 0:01:32.008 *********
2026-03-17 01:02:32.653809 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653813 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:32.653816 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:32.653820 | orchestrator |
2026-03-17 01:02:32.653824 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-17 01:02:32.653828 | orchestrator | Tuesday 17 March 2026 01:01:35 +0000 (0:00:06.246) 0:01:38.255 *********
2026-03-17 01:02:32.653831 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:32.653835 | orchestrator |
2026-03-17 01:02:32.653839 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-17 01:02:32.653843 | orchestrator | Tuesday 17 March 2026 01:01:35 +0000 (0:00:00.691) 0:01:38.947 *********
2026-03-17 01:02:32.653848 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:32.653854 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:32.653861 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:32.653868 | orchestrator |
2026-03-17 01:02:32.653874 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-17 01:02:32.653881 | orchestrator | Tuesday 17 March 2026 01:01:36 +0000 (0:00:00.682) 0:01:39.629 *********
2026-03-17 01:02:32.653888 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:32.653895 | orchestrator |
2026-03-17 01:02:32.653906 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-17 01:02:32.653913 | orchestrator | Tuesday 17 March 2026 01:01:38 +0000 (0:00:01.593) 0:01:41.223 *********
2026-03-17 01:02:32.653917 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-17 01:02:32.653921 | orchestrator |
2026-03-17 01:02:32.653924 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-17 01:02:32.653928 | orchestrator | Tuesday 17 March 2026 01:01:51 +0000 (0:00:13.246) 0:01:54.469 *********
2026-03-17 01:02:32.653932 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-17 01:02:32.653935 | orchestrator |
2026-03-17 01:02:32.653939 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-17 01:02:32.653943 | orchestrator | Tuesday 17 March 2026 01:02:20 +0000 (0:00:29.042) 0:02:23.512 *********
2026-03-17 01:02:32.653947 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-17 01:02:32.653953 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-17 01:02:32.653957 | orchestrator |
2026-03-17 01:02:32.653961 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-17 01:02:32.653964 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:07.605) 0:02:31.117 *********
2026-03-17 01:02:32.653968 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.653972 | orchestrator |
2026-03-17 01:02:32.653975 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-17 01:02:32.653979 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:00.123) 0:02:31.240 *********
2026-03-17 01:02:32.653983 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.653987 | orchestrator |
2026-03-17 01:02:32.653990 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-17 01:02:32.653994 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:00.112) 0:02:31.352 *********
2026-03-17 01:02:32.653998 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.654001 | orchestrator |
2026-03-17 01:02:32.654005 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-17 01:02:32.654009 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:00.098) 0:02:31.451 *********
2026-03-17 01:02:32.654036 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.654040 | orchestrator |
2026-03-17 01:02:32.654044 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-17 01:02:32.654047 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:00.278) 0:02:31.730 *********
2026-03-17 01:02:32.654051 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:32.654055 | orchestrator |
2026-03-17 01:02:32.654059 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-17 01:02:32.654062 | orchestrator | Tuesday 17 March 2026 01:02:31 +0000 (0:00:03.129) 0:02:34.859 *********
2026-03-17 01:02:32.654066 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:32.654070 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:32.654074 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:32.654077 | orchestrator |
2026-03-17 01:02:32.654081 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:02:32.654085 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-17 01:02:32.654102 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-17 01:02:32.654107 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-17 01:02:32.654110 | orchestrator |
2026-03-17 01:02:32.654114 | orchestrator |
2026-03-17 01:02:32.654118 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:02:32.654122 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:00.468) 0:02:35.328 *********
2026-03-17 01:02:32.654128 | orchestrator | ===============================================================================
2026-03-17 01:02:32.654132 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.04s
2026-03-17 01:02:32.654136 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.56s
2026-03-17 01:02:32.654140 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.25s
2026-03-17 01:02:32.654143 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.24s
2026-03-17 01:02:32.654147 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.68s
2026-03-17 01:02:32.654151 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.64s
2026-03-17 01:02:32.654155 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.84s
2026-03-17 01:02:32.654158 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.61s
2026-03-17 01:02:32.654162 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.25s
2026-03-17 01:02:32.654166 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.71s
2026-03-17 01:02:32.654169 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.16s
2026-03-17 01:02:32.654173 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s
2026-03-17 01:02:32.654177 | orchestrator | keystone : Creating default user role ----------------------------------- 3.13s
2026-03-17 01:02:32.654180 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.65s
2026-03-17 01:02:32.654184 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.53s
2026-03-17 01:02:32.654188 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.37s
2026-03-17 01:02:32.654195 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s
2026-03-17 01:02:32.654199 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.15s
2026-03-17 01:02:32.654202 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.06s
2026-03-17 01:02:32.654206 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.59s
2026-03-17 01:02:32.654210 | orchestrator | 2026-03-17 01:02:32 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:32.654214 | orchestrator | 2026-03-17 01:02:32 | INFO  | Wait
1 second(s) until the next check
2026-03-17 01:02:35.676181 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:35.676585 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:35.677329 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:35.677910 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:35.678576 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:35.678599 | orchestrator | 2026-03-17 01:02:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:38.705390 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:38.706370 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:38.707342 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:38.708496 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:38.709322 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:38.709402 | orchestrator | 2026-03-17 01:02:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:41.748880 | orchestrator | 2026-03-17 01:02:41 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:41.750752 | orchestrator | 2026-03-17 01:02:41 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:41.752215 | orchestrator | 2026-03-17 01:02:41 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:41.753909 | orchestrator | 2026-03-17 01:02:41 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:41.755435 | orchestrator | 2026-03-17 01:02:41 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:41.755483 | orchestrator | 2026-03-17 01:02:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:44.801276 | orchestrator | 2026-03-17 01:02:44 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:44.803419 | orchestrator | 2026-03-17 01:02:44 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:44.805254 | orchestrator | 2026-03-17 01:02:44 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:44.806995 | orchestrator | 2026-03-17 01:02:44 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:44.807928 | orchestrator | 2026-03-17 01:02:44 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:44.807974 | orchestrator | 2026-03-17 01:02:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:47.861985 | orchestrator | 2026-03-17 01:02:47 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:47.865344 | orchestrator | 2026-03-17 01:02:47 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:47.867976 | orchestrator | 2026-03-17 01:02:47 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:47.870690 | orchestrator | 2026-03-17 01:02:47 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:47.872804 | orchestrator | 2026-03-17 01:02:47 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:47.872847 | orchestrator | 2026-03-17 01:02:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:50.919018 | orchestrator | 2026-03-17 01:02:50 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:50.920984 | orchestrator | 2026-03-17 01:02:50 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:50.924014 | orchestrator | 2026-03-17 01:02:50 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:50.926672 | orchestrator | 2026-03-17 01:02:50 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:50.929170 | orchestrator | 2026-03-17 01:02:50 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state STARTED
2026-03-17 01:02:50.929677 | orchestrator | 2026-03-17 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:53.965854 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:53.968184 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:53.969844 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:02:53.972573 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:53.974519 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:53.977318 | orchestrator | 2026-03-17 01:02:53 | INFO  | Task 3d989fbe-ca8b-40d6-847d-a926826ecbdf is in state SUCCESS
2026-03-17 01:02:53.977551 | orchestrator | 2026-03-17 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:57.021204 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:02:57.023865 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:02:57.026236 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:02:57.029750 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:02:57.030475 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:02:57.030519 | orchestrator | 2026-03-17 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:00.068944 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:00.069831 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:00.070842 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:00.071637 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:00.074099 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:00.074150 | orchestrator | 2026-03-17 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:03.124606 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:03.127131 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:03.127201 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:03.127206 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:03.127774 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:03.127827 | orchestrator | 2026-03-17 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:06.172925 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:06.174937 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:06.176744 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:06.178277 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:06.179869 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:06.179920 | orchestrator | 2026-03-17 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:09.214158 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:09.217419 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:09.219403 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:09.221271 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:09.223271 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:09.223331 | orchestrator | 2026-03-17 01:03:09 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:12.265167 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:12.265922 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:12.266863 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:12.267833 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:12.268883 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:12.268993 | orchestrator | 2026-03-17 01:03:12 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:15.299624 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:15.299726 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:15.300590 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:15.301366 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:15.301968 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:15.302142 | orchestrator | 2026-03-17 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:18.334280 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:18.334330 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:18.334947 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:18.335771 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:18.336597 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:18.336631 | orchestrator | 2026-03-17 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:21.369772 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:21.370622 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:21.371570 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:21.372851 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:21.374111 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:21.374146 | orchestrator | 2026-03-17 01:03:21 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:24.401781 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:24.402482 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:24.404522 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:24.405209 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:24.406085 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:24.407354 | orchestrator | 2026-03-17 01:03:24 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:27.440698 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:27.441182 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:27.442086 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:27.442717 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:27.443639 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:27.443664 | orchestrator | 2026-03-17 01:03:27 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:30.465803 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:30.466969 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:30.468074 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:30.469368 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:30.470541 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:30.470961 | orchestrator | 2026-03-17 01:03:30 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:33.494105 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:33.494346 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:33.495172 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:33.496519 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:33.497380 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:33.497417 | orchestrator | 2026-03-17 01:03:33 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:36.518835 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:36.519207 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:36.520622 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:36.521186 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:36.522235 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:36.522284 | orchestrator | 2026-03-17 01:03:36 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:39.551281 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:39.551948 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:39.553627 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:39.554216 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:39.554631 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:39.555029 | orchestrator | 2026-03-17 01:03:39 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:42.580615 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:42.580801 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:42.581606 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:42.582399 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:42.582946 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:42.583027 | orchestrator | 2026-03-17 01:03:42 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:45.602691 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:45.603016 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:45.603603 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:45.604357 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:45.605030 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:45.605070 | orchestrator | 2026-03-17 01:03:45 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:48.638439 | orchestrator | 2026-03-17 01:03:48 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:48.638486 | orchestrator | 2026-03-17 01:03:48 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:48.638491 | orchestrator | 2026-03-17 01:03:48 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:48.639227 | orchestrator | 2026-03-17 01:03:48 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:48.639761 | orchestrator | 2026-03-17 01:03:48 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:48.639785 | orchestrator | 2026-03-17 01:03:48 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:51.666997 | orchestrator | 2026-03-17 01:03:51 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:51.667308 | orchestrator | 2026-03-17 01:03:51 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:51.668134 | orchestrator | 2026-03-17 01:03:51 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:51.668845 | orchestrator | 2026-03-17 01:03:51 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state STARTED
2026-03-17 01:03:51.670776 | orchestrator | 2026-03-17 01:03:51 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:51.670805 | orchestrator | 2026-03-17 01:03:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:54.700829 | orchestrator | 2026-03-17 01:03:54 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:54.701838 | orchestrator | 2026-03-17 01:03:54 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:54.702630 | orchestrator | 2026-03-17 01:03:54 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:54.703932 | orchestrator | 2026-03-17 01:03:54 | INFO  | Task c3cc2842-a16d-4c05-a2ed-d232c990b15f is in state SUCCESS
2026-03-17 01:03:54.705403 | orchestrator | 2026-03-17 01:03:54 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:54.705436 | orchestrator | 2026-03-17 01:03:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:57.738240 | orchestrator | 2026-03-17 01:03:57 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:03:57.739149 | orchestrator | 2026-03-17 01:03:57 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:03:57.739828 | orchestrator | 2026-03-17 01:03:57 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:03:57.740892 | orchestrator | 2026-03-17 01:03:57 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:03:57.742099 | orchestrator | 2026-03-17 01:03:57 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:03:57.742126 | orchestrator | 2026-03-17 01:03:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:00.769578 | orchestrator | 2026-03-17 01:04:00 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:00.770523 | orchestrator | 2026-03-17 01:04:00 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:00.772553 | orchestrator | 2026-03-17 01:04:00 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:00.774172 | orchestrator | 2026-03-17 01:04:00 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:00.774669 | orchestrator | 2026-03-17 01:04:00 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:00.774696 | orchestrator | 2026-03-17 01:04:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:03.811145 | orchestrator | 2026-03-17 01:04:03 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:03.813048 | orchestrator | 2026-03-17 01:04:03 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:03.813951 | orchestrator | 2026-03-17 01:04:03 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:03.814808 | orchestrator | 2026-03-17 01:04:03 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:03.815834 | orchestrator | 2026-03-17 01:04:03 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:03.815886 | orchestrator | 2026-03-17 01:04:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:06.838697 | orchestrator | 2026-03-17 01:04:06 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:06.839448 | orchestrator | 2026-03-17 01:04:06 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:06.840131 | orchestrator | 2026-03-17 01:04:06 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:06.840935 | orchestrator | 2026-03-17 01:04:06 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:06.841816 | orchestrator | 2026-03-17 01:04:06 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:06.841847 | orchestrator | 2026-03-17 01:04:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:09.865797 | orchestrator | 2026-03-17 01:04:09 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:09.865882 | orchestrator | 2026-03-17 01:04:09 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:09.866444 | orchestrator | 2026-03-17 01:04:09 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:09.867021 | orchestrator | 2026-03-17 01:04:09 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:09.867888 | orchestrator | 2026-03-17 01:04:09 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:09.867931 | orchestrator | 2026-03-17 01:04:09 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:12.900736 | orchestrator | 2026-03-17 01:04:12 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:12.901253 | orchestrator | 2026-03-17 01:04:12 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:12.904134 | orchestrator | 2026-03-17 01:04:12 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:12.905532 | orchestrator | 2026-03-17 01:04:12 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:12.906076 | orchestrator | 2026-03-17 01:04:12 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:12.906293 | orchestrator | 2026-03-17 01:04:12 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:15.929724 | orchestrator | 2026-03-17 01:04:15 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:15.930435 | orchestrator | 2026-03-17 01:04:15 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:15.931406 | orchestrator | 2026-03-17 01:04:15 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:15.932247 | orchestrator | 2026-03-17 01:04:15 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:15.932944 | orchestrator | 2026-03-17 01:04:15 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:15.933014 | orchestrator | 2026-03-17 01:04:15 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:18.964850 | orchestrator | 2026-03-17 01:04:18 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:18.966702 | orchestrator | 2026-03-17 01:04:18 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:18.967677 | orchestrator | 2026-03-17 01:04:18 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:18.968994 | orchestrator | 2026-03-17 01:04:18 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:18.969909 | orchestrator | 2026-03-17 01:04:18 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:18.970090 | orchestrator | 2026-03-17 01:04:18 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:22.014094 | orchestrator | 2026-03-17 01:04:22 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:22.018583 | orchestrator | 2026-03-17 01:04:22 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:22.018692 | orchestrator | 2026-03-17 01:04:22 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:22.018707 | orchestrator | 2026-03-17 01:04:22 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:22.018718 | orchestrator | 2026-03-17 01:04:22 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:22.018730 | orchestrator | 2026-03-17 01:04:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:25.043560 | orchestrator | 2026-03-17 01:04:25 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:25.043761 | orchestrator | 2026-03-17 01:04:25 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:25.044430 | orchestrator | 2026-03-17 01:04:25 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED
2026-03-17 01:04:25.045093 | orchestrator | 2026-03-17 01:04:25 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:25.045745 | orchestrator | 2026-03-17 01:04:25 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:25.045769 | orchestrator | 2026-03-17 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:28.074797 | orchestrator | 2026-03-17 01:04:28 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:28.074851 | orchestrator | 2026-03-17 01:04:28 | INFO  | Task
d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED 2026-03-17 01:04:28.074859 | orchestrator | 2026-03-17 01:04:28 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED 2026-03-17 01:04:28.074864 | orchestrator | 2026-03-17 01:04:28 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:04:28.074870 | orchestrator | 2026-03-17 01:04:28 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED 2026-03-17 01:04:28.074875 | orchestrator | 2026-03-17 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:31.096322 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED 2026-03-17 01:04:31.097809 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED 2026-03-17 01:04:31.099115 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state STARTED 2026-03-17 01:04:31.101392 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:04:31.102805 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED 2026-03-17 01:04:31.102832 | orchestrator | 2026-03-17 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:34.129673 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED 2026-03-17 01:04:34.129977 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED 2026-03-17 01:04:34.130452 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task cd2f54ad-5871-4f33-bbc5-9ac126987aec is in state SUCCESS 2026-03-17 01:04:34.130842 | orchestrator | 2026-03-17 01:04:34.130869 | orchestrator | 2026-03-17 01:04:34.130875 | orchestrator | PLAY [Apply role cephclient] *************************************************** 
2026-03-17 01:04:34.130883 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-17 01:04:34.130888 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:00.340) 0:00:00.340 *********
2026-03-17 01:04:34.130892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager

2026-03-17 01:04:34.130900 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-17 01:04:34.130904 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:00.210) 0:00:00.551 *********
2026-03-17 01:04:34.130908 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-17 01:04:34.130912 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-17 01:04:34.130916 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)

2026-03-17 01:04:34.130931 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-17 01:04:34.130939 | orchestrator | Tuesday 17 March 2026 01:01:59 +0000 (0:00:01.584) 0:00:02.136 *********
2026-03-17 01:04:34.130943 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})

2026-03-17 01:04:34.130950 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-17 01:04:34.130962 | orchestrator | Tuesday 17 March 2026 01:02:01 +0000 (0:00:01.126) 0:00:03.262 *********
2026-03-17 01:04:34.130966 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.130974 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-17 01:04:34.130978 | orchestrator | Tuesday 17 March 2026 01:02:01 +0000 (0:00:00.879) 0:00:04.141 *********
2026-03-17 01:04:34.130982 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.130989 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-17 01:04:34.130993 | orchestrator | Tuesday 17 March 2026 01:02:02 +0000 (0:00:00.853) 0:00:04.995 *********
2026-03-17 01:04:34.130997 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-17 01:04:34.131000 | orchestrator | ok: [testbed-manager]

2026-03-17 01:04:34.131008 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-17 01:04:34.131012 | orchestrator | Tuesday 17 March 2026 01:02:43 +0000 (0:00:40.183) 0:00:45.179 *********
2026-03-17 01:04:34.131016 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-17 01:04:34.131020 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-17 01:04:34.131023 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-17 01:04:34.131027 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:04:34.131031 | orchestrator | changed: [testbed-manager] => (item=rbd)

2026-03-17 01:04:34.131038 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-17 01:04:34.131042 | orchestrator | Tuesday 17 March 2026 01:02:46 +0000 (0:00:03.844) 0:00:49.024 *********
2026-03-17 01:04:34.131046 | orchestrator | ok: [testbed-manager] => (item=crushtool)

2026-03-17 01:04:34.131053 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-17 01:04:34.131057 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:00.609) 0:00:49.634 *********
2026-03-17 01:04:34.131069 | orchestrator | skipping: [testbed-manager]

2026-03-17 01:04:34.131076 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-17 01:04:34.131080 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:00.138) 0:00:49.773 *********
2026-03-17 01:04:34.131084 | orchestrator | skipping: [testbed-manager]

2026-03-17 01:04:34.131091 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-17 01:04:34.131095 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:00.317) 0:00:50.090 *********
2026-03-17 01:04:34.131099 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131106 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-17 01:04:34.131110 | orchestrator | Tuesday 17 March 2026 01:02:49 +0000 (0:00:01.475) 0:00:51.565 *********
2026-03-17 01:04:34.131113 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131121 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-17 01:04:34.131125 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:00.712) 0:00:52.278 *********
2026-03-17 01:04:34.131128 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131136 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-17 01:04:34.131139 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:00.567) 0:00:52.846 *********
2026-03-17 01:04:34.131143 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-17 01:04:34.131147 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-17 01:04:34.131151 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:04:34.131154 | orchestrator | ok: [testbed-manager] => (item=rbd)

2026-03-17 01:04:34.131162 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:34.131166 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0

2026-03-17 01:04:34.131184 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:34.131188 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:01.394) 0:00:54.240 *********
2026-03-17 01:04:34.131192 | orchestrator | ===============================================================================
2026-03-17 01:04:34.131196 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.18s
2026-03-17 01:04:34.131199 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.84s
2026-03-17 01:04:34.131203 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.58s
2026-03-17 01:04:34.131207 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s
2026-03-17 01:04:34.131211 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.39s
2026-03-17 01:04:34.131214 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s
2026-03-17 01:04:34.131218 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2026-03-17 01:04:34.131222 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.85s
2026-03-17 01:04:34.131226 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2026-03-17 01:04:34.131229 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.61s
2026-03-17 01:04:34.131233 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s
2026-03-17 01:04:34.131237 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2026-03-17 01:04:34.131243 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2026-03-17 01:04:34.131249 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s

2026-03-17 01:04:34.131261 | orchestrator | PLAY [Download ironic ipa images] **********************************************

2026-03-17 01:04:34.131268 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-17 01:04:34.131272 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.097) 0:00:00.097 *********
2026-03-17 01:04:34.131276 | orchestrator | changed: [localhost]

2026-03-17 01:04:34.131283 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-17 01:04:34.131287 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.946) 0:00:01.044 *********
2026-03-17 01:04:34.131291 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-03-17 01:04:34.131295 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-03-17 01:04:34.131298 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-03-17 01:04:34.131303 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs"}

2026-03-17 01:04:34.131312 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:34.131316 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0

2026-03-17 01:04:34.131327 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:34.131331 | orchestrator | Tuesday 17 March 2026 01:03:53 +0000 (0:01:17.518) 0:01:18.562 *********
2026-03-17 01:04:34.131334 | orchestrator | ===============================================================================
2026-03-17 01:04:34.131338 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 77.52s
2026-03-17 01:04:34.131342 | orchestrator | Ensure the destination directory exists --------------------------------- 0.95s

2026-03-17 01:04:34.131349 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14

2026-03-17 01:04:34.131361 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************

2026-03-17 01:04:34.131369 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-17 01:04:34.131373 | orchestrator | Tuesday 17 March 2026 01:02:56 +0000 (0:00:00.216) 0:00:00.216 *********
2026-03-17 01:04:34.131377 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131384 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-17 01:04:34.131388 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:02.342) 0:00:02.559 *********
2026-03-17 01:04:34.131392 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131399 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-17 01:04:34.131403 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:00.922) 0:00:03.481 *********
2026-03-17 01:04:34.131407 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131414 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-17 01:04:34.131418 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:00.927) 0:00:04.408 *********
2026-03-17 01:04:34.131422 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131432 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-17 01:04:34.131439 | orchestrator | Tuesday 17 March 2026 01:03:01 +0000 (0:00:00.982) 0:00:05.391 *********
2026-03-17 01:04:34.131444 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131457 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-17 01:04:34.131463 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:00.913) 0:00:06.304 *********
2026-03-17 01:04:34.131473 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131487 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-17 01:04:34.131493 | orchestrator | Tuesday 17 March 2026 01:03:03 +0000 (0:00:00.954) 0:00:07.258 *********
2026-03-17 01:04:34.131569 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131587 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-17 01:04:34.131593 | orchestrator | Tuesday 17 March 2026 01:03:04 +0000 (0:00:01.139) 0:00:08.398 *********
2026-03-17 01:04:34.131612 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131627 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-17 01:04:34.131633 | orchestrator | Tuesday 17 March 2026 01:03:05 +0000 (0:00:01.048) 0:00:09.447 *********
2026-03-17 01:04:34.131639 | orchestrator | changed: [testbed-manager]

2026-03-17 01:04:34.131655 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-17 01:04:34.131661 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:01:03.382) 0:01:12.829 *********
2026-03-17 01:04:34.131667 | orchestrator | skipping: [testbed-manager]

2026-03-17 01:04:34.131684 | orchestrator | PLAY [Restart ceph manager services] *******************************************

2026-03-17 01:04:34.131695 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:34.131701 | orchestrator | Tuesday 17 March 2026 01:04:09 +0000 (0:00:00.124) 0:01:12.953 *********
2026-03-17 01:04:34.131708 | orchestrator | changed: [testbed-node-0]

2026-03-17 01:04:34.131719 | orchestrator | PLAY [Restart ceph manager services] *******************************************

2026-03-17 01:04:34.131731 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:34.131737 | orchestrator | Tuesday 17 March 2026 01:04:20 +0000 (0:00:11.857) 0:01:24.811 *********
2026-03-17 01:04:34.131743 | orchestrator | changed: [testbed-node-1]

2026-03-17 01:04:34.131761 | orchestrator | PLAY [Restart ceph manager services] *******************************************

2026-03-17 01:04:34.131774 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:34.131780 | orchestrator | Tuesday 17 March 2026 01:04:32 +0000 (0:00:11.643) 0:01:36.454 *********
2026-03-17 01:04:34.131787 | orchestrator | changed: [testbed-node-2]

2026-03-17 01:04:34.131798 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:34.131805 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 01:04:34.131812 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:04:34.131819 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:04:34.131825 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

2026-03-17 01:04:34.131851 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:34.131855 | orchestrator | Tuesday 17 March 2026 01:04:33 +0000 (0:00:01.256) 0:01:37.711 *********
2026-03-17 01:04:34.131858 | orchestrator | ===============================================================================
2026-03-17 01:04:34.131862 | orchestrator | Create admin user ------------------------------------------------------ 63.38s
2026-03-17 01:04:34.131866 | orchestrator | Restart ceph manager service ------------------------------------------- 24.76s
2026-03-17 01:04:34.131870 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.34s
2026-03-17 01:04:34.131873 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s
2026-03-17 01:04:34.131877 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.05s
2026-03-17 01:04:34.131881 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.98s
2026-03-17 01:04:34.131885 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.95s
2026-03-17 01:04:34.131888 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.93s
2026-03-17 01:04:34.131892 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.92s
2026-03-17 01:04:34.131896 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.91s
2026-03-17 01:04:34.131899 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s
2026-03-17 01:04:34.131908 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:34.132492 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:34.132523 | orchestrator | 2026-03-17 01:04:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:37.162269 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:37.163623 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state STARTED
2026-03-17 01:04:37.165388 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:04:37.166060 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:37.166079 | orchestrator | 2026-03-17 01:04:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:40.207463 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:04:40.208113 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task d5ed5bbb-fc05-4b8b-9e99-722ab86920c6 is in state SUCCESS

2026-03-17 01:04:40.209359 | orchestrator | PLAY [Group hosts based on configuration] **************************************

2026-03-17 01:04:40.209375 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:04:40.209383 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.291) 0:00:00.291 *********
2026-03-17 01:04:40.209390 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:04:40.209398 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:04:40.209405 | orchestrator | ok: [testbed-node-2]

2026-03-17 01:04:40.209432 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:04:40.209440 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.284) 0:00:00.575 *********
2026-03-17 01:04:40.209448 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-17 01:04:40.209476 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-17 01:04:40.209483 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)

2026-03-17 01:04:40.209498 | orchestrator | PLAY [Apply role barbican] *****************************************************

2026-03-17 01:04:40.209512 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-17 01:04:40.209519 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.275) 0:00:00.850 *********
2026-03-17 01:04:40.209526 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

2026-03-17 01:04:40.209541 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-17 01:04:40.209548 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.541) 0:00:01.392 *********
2026-03-17 01:04:40.209556 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))

2026-03-17 01:04:40.209570 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-17 01:04:40.209933 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:03.918) 0:00:05.311 *********
2026-03-17 01:04:40.209941 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-17 01:04:40.209949 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)

2026-03-17 01:04:40.209964 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-17 01:04:40.209971 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:07.770) 0:00:13.081 *********
2026-03-17 01:04:40.209978 | orchestrator | ok: [testbed-node-0] => (item=service)

2026-03-17 01:04:40.209993 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-17 01:04:40.210000 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:03.533) 0:00:16.615 *********
2026-03-17 01:04:40.210007 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-17 01:04:40.210053 | orchestrator | [WARNING]: Module did not set no_log for update_password

2026-03-17 01:04:40.210089 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-17 01:04:40.210098 | orchestrator | Tuesday 17 March 2026 01:02:56 +0000 (0:00:04.057) 0:00:20.673 *********
2026-03-17 01:04:40.210106 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:04:40.210114 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-17 01:04:40.210121 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-17 01:04:40.210129 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-17 01:04:40.210136 | orchestrator | changed: [testbed-node-0] => (item=audit)

2026-03-17 01:04:40.210151 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-17 01:04:40.210159 | orchestrator | Tuesday 17 March 2026 01:03:13 +0000 (0:00:17.517) 0:00:38.190 *********
2026-03-17 01:04:40.210166 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)

2026-03-17 01:04:40.210181 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-17 01:04:40.210189 | orchestrator | Tuesday 17 March 2026 01:03:17 +0000 (0:00:03.812) 0:00:42.003 *********
2026-03-17 01:04:40.210199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-17 01:04:40.210243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-17 01:04:40.210254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-17 01:04:40.210262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-17 01:04:40.210271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210327 | orchestrator | 2026-03-17 01:04:40.210335 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-17 01:04:40.210343 | orchestrator | Tuesday 17 March 2026 01:03:19 +0000 (0:00:02.122) 0:00:44.125 ********* 2026-03-17 01:04:40.210351 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-17 01:04:40.210359 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-17 01:04:40.210366 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-17 01:04:40.210374 | orchestrator | 2026-03-17 01:04:40.210382 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-17 01:04:40.210390 | orchestrator | Tuesday 17 March 2026 01:03:20 +0000 (0:00:01.034) 0:00:45.159 ********* 2026-03-17 01:04:40.210397 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.210404 | orchestrator | 2026-03-17 01:04:40.210412 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-17 01:04:40.210420 | orchestrator | Tuesday 17 March 2026 01:03:20 +0000 (0:00:00.151) 0:00:45.311 ********* 2026-03-17 01:04:40.210427 | orchestrator | 
skipping: [testbed-node-0] 2026-03-17 01:04:40.210435 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.210443 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.210450 | orchestrator | 2026-03-17 01:04:40.210457 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-17 01:04:40.210470 | orchestrator | Tuesday 17 March 2026 01:03:21 +0000 (0:00:00.283) 0:00:45.594 ********* 2026-03-17 01:04:40.210480 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:04:40.210488 | orchestrator | 2026-03-17 01:04:40.210496 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-17 01:04:40.210504 | orchestrator | Tuesday 17 March 2026 01:03:21 +0000 (0:00:00.596) 0:00:46.190 ********* 2026-03-17 01:04:40.210516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210630 | orchestrator | 2026-03-17 01:04:40.210637 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-17 01:04:40.210644 | orchestrator | Tuesday 17 March 2026 01:03:25 +0000 (0:00:03.562) 0:00:49.753 ********* 2026-03-17 
01:04:40.210652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210686 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.210699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-03-17 01:04:40.210722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210731 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.210740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210773 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.210781 | orchestrator | 2026-03-17 01:04:40.210789 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-17 01:04:40.210796 | orchestrator | Tuesday 17 March 2026 01:03:25 +0000 (0:00:00.664) 0:00:50.418 ********* 2026-03-17 01:04:40.210812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210842 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.210850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-03-17 01:04:40.210871 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.210886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.210894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.210911 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.210916 | orchestrator | 2026-03-17 01:04:40.210922 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-17 01:04:40.210929 | orchestrator | Tuesday 17 March 2026 01:03:26 +0000 (0:00:00.557) 0:00:50.975 ********* 2026-03-17 01:04:40.210936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.210963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.210997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211013 | orchestrator | 2026-03-17 01:04:40.211019 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-17 01:04:40.211025 | orchestrator | Tuesday 17 March 2026 01:03:30 +0000 (0:00:03.599) 0:00:54.575 ********* 2026-03-17 01:04:40.211031 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211037 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:40.211043 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 01:04:40.211054 | orchestrator | 2026-03-17 01:04:40.211061 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-17 01:04:40.211068 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:02.133) 0:00:56.708 ********* 2026-03-17 01:04:40.211074 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:04:40.211080 | orchestrator | 2026-03-17 01:04:40.211086 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-17 01:04:40.211092 | orchestrator | Tuesday 17 March 2026 01:03:34 +0000 (0:00:02.078) 0:00:58.786 ********* 2026-03-17 01:04:40.211098 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.211104 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.211110 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.211116 | orchestrator | 2026-03-17 01:04:40.211121 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-17 01:04:40.211127 | orchestrator | Tuesday 17 March 2026 01:03:34 +0000 (0:00:00.677) 0:00:59.464 ********* 2026-03-17 01:04:40.211134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211207 | 
orchestrator | 2026-03-17 01:04:40.211214 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-17 01:04:40.211221 | orchestrator | Tuesday 17 March 2026 01:03:44 +0000 (0:00:09.762) 0:01:09.227 ********* 2026-03-17 01:04:40.211231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.211243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211257 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.211263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.211296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211323 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.211332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:40.211340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:40.211356 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.211363 | orchestrator | 2026-03-17 01:04:40.211369 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-17 01:04:40.211376 | orchestrator | Tuesday 17 March 2026 01:03:46 +0000 (0:00:01.425) 0:01:10.652 ********* 2026-03-17 01:04:40.211383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:40.211419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:40.211477 | orchestrator | 2026-03-17 01:04:40.211484 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-17 01:04:40.211491 | orchestrator | Tuesday 17 March 2026 01:03:49 +0000 (0:00:03.100) 0:01:13.752 ********* 2026-03-17 01:04:40.211499 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:40.211506 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:40.211515 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:40.211524 | orchestrator | 2026-03-17 01:04:40.211533 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-17 01:04:40.211541 | orchestrator | Tuesday 17 March 2026 01:03:49 +0000 (0:00:00.699) 0:01:14.452 ********* 2026-03-17 01:04:40.211549 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211556 | orchestrator | 2026-03-17 01:04:40.211565 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-17 01:04:40.211573 | orchestrator | Tuesday 17 March 2026 01:03:51 +0000 (0:00:02.069) 0:01:16.522 ********* 2026-03-17 01:04:40.211580 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211587 | orchestrator | 2026-03-17 01:04:40.211608 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-17 01:04:40.211615 | orchestrator | Tuesday 17 March 2026 01:03:54 +0000 (0:00:02.444) 0:01:18.966 ********* 2026-03-17 01:04:40.211622 | orchestrator | changed: [testbed-node-0] 2026-03-17 
01:04:40.211627 | orchestrator | 2026-03-17 01:04:40.211633 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-17 01:04:40.211639 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:12.532) 0:01:31.499 ********* 2026-03-17 01:04:40.211645 | orchestrator | 2026-03-17 01:04:40.211652 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-17 01:04:40.211658 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.231) 0:01:31.731 ********* 2026-03-17 01:04:40.211664 | orchestrator | 2026-03-17 01:04:40.211670 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-17 01:04:40.211677 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.058) 0:01:31.789 ********* 2026-03-17 01:04:40.211683 | orchestrator | 2026-03-17 01:04:40.211690 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-17 01:04:40.211696 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.059) 0:01:31.849 ********* 2026-03-17 01:04:40.211702 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:40.211708 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:40.211715 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211727 | orchestrator | 2026-03-17 01:04:40.211732 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-17 01:04:40.211738 | orchestrator | Tuesday 17 March 2026 01:04:16 +0000 (0:00:09.436) 0:01:41.285 ********* 2026-03-17 01:04:40.211744 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211750 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:40.211756 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:40.211763 | orchestrator | 2026-03-17 01:04:40.211770 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] 
***************** 2026-03-17 01:04:40.211777 | orchestrator | Tuesday 17 March 2026 01:04:26 +0000 (0:00:09.723) 0:01:51.009 ********* 2026-03-17 01:04:40.211783 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:40.211789 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:40.211795 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:40.211802 | orchestrator | 2026-03-17 01:04:40.211809 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:04:40.211816 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:04:40.211824 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:04:40.211830 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:04:40.211837 | orchestrator | 2026-03-17 01:04:40.211844 | orchestrator | 2026-03-17 01:04:40.211850 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:04:40.211857 | orchestrator | Tuesday 17 March 2026 01:04:37 +0000 (0:00:10.843) 0:02:01.853 ********* 2026-03-17 01:04:40.211864 | orchestrator | =============================================================================== 2026-03-17 01:04:40.211870 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.52s 2026-03-17 01:04:40.211884 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.53s 2026-03-17 01:04:40.211891 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.84s 2026-03-17 01:04:40.211897 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.76s 2026-03-17 01:04:40.211904 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.72s 
2026-03-17 01:04:40.211915 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.44s 2026-03-17 01:04:40.211921 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.77s 2026-03-17 01:04:40.211928 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s 2026-03-17 01:04:40.211934 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.92s 2026-03-17 01:04:40.211941 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.81s 2026-03-17 01:04:40.211949 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.60s 2026-03-17 01:04:40.211956 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.56s 2026-03-17 01:04:40.211962 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.53s 2026-03-17 01:04:40.211969 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.10s 2026-03-17 01:04:40.211976 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.44s 2026-03-17 01:04:40.211982 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.13s 2026-03-17 01:04:40.211989 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.12s 2026-03-17 01:04:40.211996 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.08s 2026-03-17 01:04:40.212003 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s 2026-03-17 01:04:40.212010 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.43s 2026-03-17 01:04:40.212024 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 
2026-03-17 01:04:40.212033 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:04:40.212040 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:04:40.212047 | orchestrator | 2026-03-17 01:04:40 | INFO  | Wait 1 second(s) until the next check
[identical status checks for tasks e3e68071-ee69-40e5-85b5-80cdaf3bb767, c1f2eaee-b1db-4d9b-a3f1-615c59e641bf, 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 and 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0, all in state STARTED, repeated at roughly three-second intervals from 01:04:43 to 01:05:04; condensed]
2026-03-17 01:05:07.598405 | orchestrator | 2026-03-17 01:05:07 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:07.600313 | orchestrator | 2026-03-17 01:05:07 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:07.602340 | orchestrator | 2026-03-17 01:05:07 | INFO  | Task 
87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:07.604372 | orchestrator | 2026-03-17 01:05:07 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state STARTED
2026-03-17 01:05:07.604420 | orchestrator | 2026-03-17 01:05:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:10.636014 | orchestrator | 2026-03-17 01:05:10 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:10.637470 | orchestrator | 2026-03-17 01:05:10 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:10.638799 | orchestrator | 2026-03-17 01:05:10 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:10.640476 | orchestrator | 2026-03-17 01:05:10 | INFO  | Task 82e88e0c-18a4-44f4-bb0d-dbde0b4f01b0 is in state SUCCESS
2026-03-17 01:05:10.641736 | orchestrator |
2026-03-17 01:05:10.641770 | orchestrator |
2026-03-17 01:05:10.641778 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:05:10.641785 | orchestrator |
2026-03-17 01:05:10.641791 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:05:10.641798 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.357) 0:00:00.357 *********
2026-03-17 01:05:10.641805 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:05:10.641828 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:05:10.641835 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:05:10.641841 | orchestrator |
2026-03-17 01:05:10.641848 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:05:10.641854 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.439) 0:00:00.797 *********
2026-03-17 01:05:10.641860 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-17 01:05:10.641867 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-17 01:05:10.641874 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-17 01:05:10.641880 | orchestrator |
2026-03-17 01:05:10.641886 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-17 01:05:10.641892 | orchestrator |
2026-03-17 01:05:10.641898 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:05:10.641905 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.569) 0:00:01.366 *********
2026-03-17 01:05:10.641919 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:05:10.641926 | orchestrator |
2026-03-17 01:05:10.641932 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-17 01:05:10.641938 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.662) 0:00:02.029 *********
2026-03-17 01:05:10.641944 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-17 01:05:10.641951 | orchestrator |
2026-03-17 01:05:10.641968 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-17 01:05:10.641980 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:03.981) 0:00:06.011 *********
2026-03-17 01:05:10.641986 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-17 01:05:10.641992 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-17 01:05:10.641999 | orchestrator |
2026-03-17 01:05:10.642005 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-17 01:05:10.642029 | orchestrator | Tuesday 17 March 2026 01:04:11 +0000 (0:00:07.291) 0:00:13.302 *********
2026-03-17 01:05:10.642051 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:05:10.642057 | orchestrator |
2026-03-17 01:05:10.642064 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-17 01:05:10.642070 | orchestrator | Tuesday 17 March 2026 01:04:14 +0000 (0:00:03.189) 0:00:16.492 *********
2026-03-17 01:05:10.642119 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-17 01:05:10.642126 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:05:10.642133 | orchestrator |
2026-03-17 01:05:10.642139 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-17 01:05:10.642146 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:04.078) 0:00:20.570 *********
2026-03-17 01:05:10.642153 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:05:10.642159 | orchestrator |
2026-03-17 01:05:10.642202 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-17 01:05:10.642209 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:03.749) 0:00:24.320 *********
2026-03-17 01:05:10.642215 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-17 01:05:10.642221 | orchestrator |
2026-03-17 01:05:10.642228 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:05:10.642234 | orchestrator | Tuesday 17 March 2026 01:04:26 +0000 (0:00:03.853) 0:00:28.173 *********
2026-03-17 01:05:10.642241 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642247 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:10.642254 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:10.642260 | orchestrator |
2026-03-17 01:05:10.642266 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-17 01:05:10.642279
| orchestrator | Tuesday 17 March 2026 01:04:26 +0000 (0:00:00.321) 0:00:28.495 *********
2026-03-17 01:05:10.642288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
[in the tasks below, every per-node item is this same placement-api definition, differing only in the healthcheck address (192.168.16.10 / .11 / .12 for testbed-node-0/1/2); the repeated dumps are condensed to (item=placement-api)]
2026-03-17 01:05:10.642307 | orchestrator | changed: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642319 | orchestrator | changed: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642325 | orchestrator |
2026-03-17 01:05:10.642332 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-17 01:05:10.642338 | orchestrator | Tuesday 17 March 2026 01:04:27 +0000 (0:00:01.483) 0:00:29.979 *********
2026-03-17 01:05:10.642344 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642350 | orchestrator |
2026-03-17 01:05:10.642357 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-17 01:05:10.642363 | orchestrator | Tuesday 17 March 2026 01:04:28 +0000 (0:00:00.233) 0:00:30.212 *********
2026-03-17 01:05:10.642369 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642375 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:10.642381 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:10.642387 | orchestrator |
2026-03-17 01:05:10.642393 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:05:10.642400 | orchestrator | Tuesday 17 March 2026 01:04:28 +0000 (0:00:00.297) 0:00:30.510 *********
2026-03-17 01:05:10.642406 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:05:10.642416 | orchestrator |
2026-03-17 01:05:10.642423 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-17 01:05:10.642429 | orchestrator | Tuesday 17 March 2026 01:04:29 +0000 (0:00:00.767) 0:00:31.278 *********
2026-03-17 01:05:10.642436 | orchestrator | changed: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642447 | orchestrator | changed: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642456 | orchestrator | changed: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642463 | orchestrator |
2026-03-17 01:05:10.642469 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-17 01:05:10.642475 | orchestrator | Tuesday 17 March 2026 01:04:30 +0000 (0:00:01.441) 0:00:32.720 *********
2026-03-17 01:05:10.642482 | orchestrator | skipping: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642492 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642498 | orchestrator | skipping: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642505 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:10.642515 | orchestrator | skipping: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642521 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:10.642527 | orchestrator |
2026-03-17 01:05:10.642534 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-17 01:05:10.642540 | orchestrator | Tuesday 17 March 2026 01:04:31 +0000 (0:00:00.613) 0:00:33.333 *********
2026-03-17 01:05:10.642549 | orchestrator | skipping: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642556 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642574 | orchestrator | skipping: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642585 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:10.642592 | orchestrator | skipping: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642599 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:10.642606 | orchestrator |
2026-03-17 01:05:10.642612 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-17 01:05:10.642619 | orchestrator | Tuesday 17 March 2026 01:04:31 +0000 (0:00:00.576) 0:00:33.910 *********
2026-03-17 01:05:10.642630 | orchestrator | changed: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642637 | orchestrator | changed: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642647 | orchestrator | changed: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642658 | orchestrator |
2026-03-17 01:05:10.642665 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-17 01:05:10.642671 | orchestrator | Tuesday 17 March 2026 01:04:33 +0000 (0:00:01.911) 0:00:35.822 *********
2026-03-17 01:05:10.642678 | orchestrator | changed: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642685 | orchestrator | changed: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642696 | orchestrator | changed: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642703 | orchestrator |
2026-03-17 01:05:10.642710 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-17 01:05:10.642716 | orchestrator | Tuesday 17 March 2026 01:04:37 +0000 (0:00:03.822) 0:00:39.645 *********
2026-03-17 01:05:10.642723 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-17 01:05:10.642732 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-17 01:05:10.642739 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-17 01:05:10.642745 | orchestrator |
2026-03-17 01:05:10.642752 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-17 01:05:10.642762 | orchestrator | Tuesday 17 March 2026 01:04:39 +0000 (0:00:01.967) 0:00:41.613 *********
2026-03-17 01:05:10.642769 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:10.642775 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:05:10.642782 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:05:10.642788 | orchestrator |
2026-03-17 01:05:10.642795 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-17 01:05:10.642801 | orchestrator | Tuesday 17 March 2026 01:04:40 +0000 (0:00:01.509) 0:00:43.123 *********
2026-03-17 01:05:10.642808 | orchestrator | skipping: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642815 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:10.642821 | orchestrator | skipping: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642828 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:10.642838 | orchestrator | skipping: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642845 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:10.642852 | orchestrator |
2026-03-17 01:05:10.642860 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-17 01:05:10.642869 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:00.606) 0:00:43.729 *********
2026-03-17 01:05:10.642879 | orchestrator | changed: [testbed-node-2] => (item=placement-api)
2026-03-17 01:05:10.642893 | orchestrator | changed: [testbed-node-1] => (item=placement-api)
2026-03-17 01:05:10.642901 | orchestrator | changed: [testbed-node-0] => (item=placement-api)
2026-03-17 01:05:10.642908 | orchestrator |
2026-03-17 01:05:10.642917 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-03-17 01:05:10.642924 | orchestrator | Tuesday 
17 March 2026 01:04:43 +0000 (0:00:01.789) 0:00:45.518 *********
2026-03-17 01:05:10.642931 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:10.642938 | orchestrator |
2026-03-17 01:05:10.642946 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-03-17 01:05:10.642953 | orchestrator | Tuesday 17 March 2026 01:04:45 +0000 (0:00:02.318) 0:00:47.837 *********
2026-03-17 01:05:10.642961 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:10.642969 | orchestrator |
2026-03-17 01:05:10.642976 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-03-17 01:05:10.642983 | orchestrator | Tuesday 17 March 2026 01:04:48 +0000 (0:00:02.358) 0:00:50.196 *********
2026-03-17 01:05:10.642990 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:10.642997 | orchestrator |
2026-03-17 01:05:10.643004 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-17 01:05:10.643013 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:11.661) 0:01:01.857 *********
2026-03-17 01:05:10.643020 | orchestrator |
2026-03-17 01:05:10.643027 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-17 01:05:10.643034 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:00.068) 0:01:01.926 *********
2026-03-17 01:05:10.643041 | orchestrator |
2026-03-17 01:05:10.643052 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-17 01:05:10.643065 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:00.070) 0:01:01.997 *********
2026-03-17 01:05:10.643072 | orchestrator |
2026-03-17 01:05:10.643079 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-17 01:05:10.643086 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:00.072) 0:01:02.070 *********
2026-03-17 01:05:10.643094 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:05:10.643101 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:05:10.643110 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:10.643117 | orchestrator |
2026-03-17 01:05:10.643124 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:05:10.643132 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:05:10.643140 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 01:05:10.643147 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 01:05:10.643157 | orchestrator |
2026-03-17 01:05:10.643164 | orchestrator |
2026-03-17 01:05:10.643174 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:05:10.643181 | orchestrator | Tuesday 17 March 2026 01:05:07 +0000 (0:00:07.860) 0:01:09.930 *********
2026-03-17 01:05:10.643188 | orchestrator | ===============================================================================
2026-03-17 01:05:10.643196 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.66s
2026-03-17 01:05:10.643203 | orchestrator | placement : Restart placement-api container ----------------------------- 7.86s
2026-03-17 01:05:10.643210 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.29s
2026-03-17 01:05:10.643217 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.08s
2026-03-17 01:05:10.643224 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.98s
2026-03-17 01:05:10.643231 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.85s
2026-03-17 01:05:10.643239 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.82s
2026-03-17 01:05:10.643246 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.75s
2026-03-17 01:05:10.643253 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.19s
2026-03-17 01:05:10.643261 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2026-03-17 01:05:10.643269 | orchestrator | placement : Creating placement databases -------------------------------- 2.32s
2026-03-17 01:05:10.643276 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.97s
2026-03-17 01:05:10.643282 | orchestrator | placement : Copying over config.json files for services ----------------- 1.91s
2026-03-17 01:05:10.643289 | orchestrator | placement : Check placement containers ---------------------------------- 1.79s
2026-03-17 01:05:10.643295 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s
2026-03-17 01:05:10.643302 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.48s
2026-03-17 01:05:10.643309 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.44s
2026-03-17 01:05:10.643315 | orchestrator | placement : include_tasks ----------------------------------------------- 0.77s
2026-03-17 01:05:10.643322 | orchestrator | placement : include_tasks ----------------------------------------------- 0.66s
2026-03-17 01:05:10.643329 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.61s
2026-03-17 01:05:10.643335 | orchestrator | 2026-03-17 01:05:10 | INFO  | Task 04a34d11-e60e-4ea3-9d34-0dedf410e87c is in state STARTED
2026-03-17 01:05:10.643342 | orchestrator | 2026-03-17 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17
01:05:13.683760 | orchestrator | 2026-03-17 01:05:13 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:13.684340 | orchestrator | 2026-03-17 01:05:13 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:13.687347 | orchestrator | 2026-03-17 01:05:13 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:13.689516 | orchestrator | 2026-03-17 01:05:13 | INFO  | Task 04a34d11-e60e-4ea3-9d34-0dedf410e87c is in state SUCCESS
2026-03-17 01:05:13.690353 | orchestrator | 2026-03-17 01:05:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:16.741051 | orchestrator | 2026-03-17 01:05:16 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:16.743379 | orchestrator | 2026-03-17 01:05:16 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:16.745054 | orchestrator | 2026-03-17 01:05:16 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:16.746707 | orchestrator | 2026-03-17 01:05:16 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED
2026-03-17 01:05:16.747356 | orchestrator | 2026-03-17 01:05:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:19.787446 | orchestrator | 2026-03-17 01:05:19 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:19.788024 | orchestrator | 2026-03-17 01:05:19 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:19.788833 | orchestrator | 2026-03-17 01:05:19 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:19.790152 | orchestrator | 2026-03-17 01:05:19 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED
2026-03-17 01:05:19.790174 | orchestrator | 2026-03-17 01:05:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:22.830843 | orchestrator | 2026-03-17 01:05:22 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:22.832444 | orchestrator | 2026-03-17 01:05:22 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:22.834120 | orchestrator | 2026-03-17 01:05:22 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:22.835600 | orchestrator | 2026-03-17 01:05:22 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED
2026-03-17 01:05:22.835643 | orchestrator | 2026-03-17 01:05:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:25.864004 | orchestrator | 2026-03-17 01:05:25 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state STARTED
2026-03-17 01:05:25.864050 | orchestrator | 2026-03-17 01:05:25 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED
2026-03-17 01:05:25.865072 | orchestrator | 2026-03-17 01:05:25 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED
2026-03-17 01:05:25.866416 | orchestrator | 2026-03-17 01:05:25 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED
2026-03-17 01:05:25.866459 | orchestrator | 2026-03-17 01:05:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:28.898756 | orchestrator | 2026-03-17 01:05:28 | INFO  | Task e3e68071-ee69-40e5-85b5-80cdaf3bb767 is in state SUCCESS
2026-03-17 01:05:28.900194 | orchestrator |
2026-03-17 01:05:28.900236 | orchestrator |
2026-03-17 01:05:28.900245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:05:28.900253 | orchestrator |
2026-03-17 01:05:28.900257 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:05:28.900261 | orchestrator | Tuesday 17 March 2026 01:05:10 +0000 (0:00:00.180) 0:00:00.180 *********
2026-03-17 01:05:28.900276 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:05:28.900281 |
orchestrator | ok: [testbed-node-1]
2026-03-17 01:05:28.900285 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:05:28.900289 | orchestrator |
2026-03-17 01:05:28.900293 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:05:28.900296 | orchestrator | Tuesday 17 March 2026 01:05:11 +0000 (0:00:00.290) 0:00:00.470 *********
2026-03-17 01:05:28.900300 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-17 01:05:28.900304 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-17 01:05:28.900308 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-17 01:05:28.900312 | orchestrator |
2026-03-17 01:05:28.900316 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-17 01:05:28.900319 | orchestrator |
2026-03-17 01:05:28.900323 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-17 01:05:28.900327 | orchestrator | Tuesday 17 March 2026 01:05:11 +0000 (0:00:00.495) 0:00:00.965 *********
2026-03-17 01:05:28.900331 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:05:28.900334 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:05:28.900338 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:05:28.900342 | orchestrator |
2026-03-17 01:05:28.900346 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:05:28.900350 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:05:28.900355 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:05:28.900360 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:05:28.900363 | orchestrator |
2026-03-17 01:05:28.900367 | orchestrator |
2026-03-17 01:05:28.900371 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:05:28.900376 | orchestrator | Tuesday 17 March 2026 01:05:12 +0000 (0:00:00.989) 0:00:01.955 *********
2026-03-17 01:05:28.900382 | orchestrator | ===============================================================================
2026-03-17 01:05:28.900388 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.99s
2026-03-17 01:05:28.900394 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2026-03-17 01:05:28.900438 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-17 01:05:28.900445 | orchestrator |
2026-03-17 01:05:28.900452 | orchestrator |
2026-03-17 01:05:28.900458 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:05:28.900464 | orchestrator |
2026-03-17 01:05:28.900468 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:05:28.900471 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.281) 0:00:00.281 *********
2026-03-17 01:05:28.900475 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:05:28.900479 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:05:28.900491 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:05:28.900495 | orchestrator |
2026-03-17 01:05:28.900503 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:05:28.900507 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.220) 0:00:00.501 *********
2026-03-17 01:05:28.900512 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-17 01:05:28.900515 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-17 01:05:28.900519 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-17
01:05:28.900523 | orchestrator |
2026-03-17 01:05:28.900526 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-17 01:05:28.900530 | orchestrator |
2026-03-17 01:05:28.900534 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-17 01:05:28.900558 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.257) 0:00:00.759 *********
2026-03-17 01:05:28.900565 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:05:28.900571 | orchestrator |
2026-03-17 01:05:28.900583 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-17 01:05:28.900587 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.460) 0:00:01.219 *********
2026-03-17 01:05:28.900591 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-17 01:05:28.900595 | orchestrator |
2026-03-17 01:05:28.900598 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-17 01:05:28.900602 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:03.599) 0:00:04.819 *********
2026-03-17 01:05:28.900606 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-17 01:05:28.900610 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-17 01:05:28.900614 | orchestrator |
2026-03-17 01:05:28.900617 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-17 01:05:28.900621 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:07.153) 0:00:11.973 *********
2026-03-17 01:05:28.900625 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-17 01:05:28.900629 | orchestrator |
2026-03-17 01:05:28.900632 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-17 01:05:28.900636 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:03.240) 0:00:15.214 *********
2026-03-17 01:05:28.900647 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-17 01:05:28.900651 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:05:28.900655 | orchestrator |
2026-03-17 01:05:28.900659 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-17 01:05:28.900663 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:04.088) 0:00:19.303 *********
2026-03-17 01:05:28.900667 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:05:28.900670 | orchestrator |
2026-03-17 01:05:28.900674 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-17 01:05:28.900678 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:03.714) 0:00:23.017 *********
2026-03-17 01:05:28.900682 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-17 01:05:28.900685 | orchestrator |
2026-03-17 01:05:28.900689 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-17 01:05:28.900693 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:04.606) 0:00:27.624 *********
2026-03-17 01:05:28.900698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.900705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.900714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.900719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900768 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.900792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901058 | orchestrator | 2026-03-17 01:05:28.901064 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-17 01:05:28.901071 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:03.681) 
0:00:31.305 ********* 2026-03-17 01:05:28.901078 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:28.901084 | orchestrator | 2026-03-17 01:05:28.901091 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-17 01:05:28.901097 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:00.205) 0:00:31.510 ********* 2026-03-17 01:05:28.901104 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:28.901110 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:28.901117 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:28.901123 | orchestrator | 2026-03-17 01:05:28.901130 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:05:28.901136 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:00.253) 0:00:31.764 ********* 2026-03-17 01:05:28.901142 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:05:28.901155 | orchestrator | 2026-03-17 01:05:28.901159 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-17 01:05:28.901169 | orchestrator | Tuesday 17 March 2026 01:03:07 +0000 (0:00:00.445) 0:00:32.210 ********* 2026-03-17 01:05:28.901180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.901185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.901193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-03-17 01:05:28.901235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901268 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.901345 | orchestrator | 2026-03-17 01:05:28.901638 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-17 01:05:28.901653 | orchestrator | Tuesday 17 March 2026 01:03:13 +0000 (0:00:05.695) 0:00:37.906 ********* 2026-03-17 01:05:28.901791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:05:28.901801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.901809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 
01:05:28.901861 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:28.901865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:05:28.901869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.901873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901927 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:28.901934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:05:28.901940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.901958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.901999 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:28.902005 | orchestrator | 2026-03-17 01:05:28.902057 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-17 01:05:28.902069 | orchestrator | Tuesday 17 March 2026 01:03:14 +0000 (0:00:01.047) 0:00:38.953 ********* 2026-03-17 01:05:28.902075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:05:28.902082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.902089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902125 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:28.902165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:05:28.902172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.902178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902211 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:28.902268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-03-17 01:05:28.902278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:05:28.902285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902297 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:05:28.902319 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:28.902326 | orchestrator | 2026-03-17 01:05:28.902373 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-17 01:05:28.902380 | orchestrator | Tuesday 17 March 2026 01:03:15 +0000 (0:00:01.001) 0:00:39.954 ********* 2026-03-17 01:05:28.902396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902513 | orchestrator | 2026-03-17 01:05:28.902517 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2026-03-17 01:05:28.902521 | orchestrator | Tuesday 17 March 2026 01:03:20 +0000 (0:00:05.593) 0:00:45.548 ********* 2026-03-17 01:05:28.902535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902557 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.902561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902622 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.902644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902650 | orchestrator |
2026-03-17 01:05:28.902657 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-17 01:05:28.902663 | orchestrator | Tuesday 17 March 2026 01:03:39 +0000 (0:00:18.530) 0:01:04.079 *********
2026-03-17 01:05:28.902669 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:05:28.902676 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:05:28.902683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:05:28.902687 | orchestrator |
2026-03-17 01:05:28.902690 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-17 01:05:28.902694 | orchestrator | Tuesday 17 March 2026 01:03:45 +0000 (0:00:06.126) 0:01:10.205 *********
2026-03-17 01:05:28.902698 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:05:28.902712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:05:28.902716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:05:28.902719 | orchestrator |
2026-03-17 01:05:28.902723 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-17 01:05:28.902727 | orchestrator | Tuesday 17 March 2026 01:03:48 +0000 (0:00:03.072) 0:01:13.277 *********
2026-03-17 01:05:28.902734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902829 | orchestrator |
2026-03-17 01:05:28.902833 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-17 01:05:28.902836 | orchestrator | Tuesday 17 March 2026 01:03:52 +0000 (0:00:03.853) 0:01:17.131 *********
2026-03-17 01:05:28.902843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.902858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.902903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.902938 | orchestrator |
2026-03-17 01:05:28.902944 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-17 01:05:28.902951 | orchestrator | Tuesday 17 March 2026 01:03:55 +0000 (0:00:02.854) 0:01:19.985 *********
2026-03-17 01:05:28.902958 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:28.902965 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:28.902973 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:28.902979 | orchestrator |
2026-03-17 01:05:28.902986 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-17 01:05:28.902996 | orchestrator | Tuesday 17 March 2026 01:03:55 +0000 (0:00:00.551) 0:01:20.537 *********
2026-03-17 01:05:28.903001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.903008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.903020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.903034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.903041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903076 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:28.903080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903085 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:28.903091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.903099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:05:28.903106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:05:28.903124 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:28.903129 | orchestrator |
2026-03-17 01:05:28.903133 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-17 01:05:28.903137 | orchestrator | Tuesday 17 March 2026 01:03:57 +0000 (0:00:01.255) 0:01:21.792 *********
2026-03-17 01:05:28.903145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:05:28.903159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.903174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:05:28.903182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903242 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:05:28.903287 | orchestrator | 2026-03-17 01:05:28.903292 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:05:28.903296 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:04.309) 0:01:26.102 ********* 2026-03-17 01:05:28.903300 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:28.903304 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:28.903309 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:28.903313 | orchestrator | 2026-03-17 01:05:28.903317 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-17 01:05:28.903322 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.399) 0:01:26.501 ********* 2026-03-17 01:05:28.903326 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-17 01:05:28.903334 | orchestrator | 2026-03-17 01:05:28.903338 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-03-17 01:05:28.903342 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:02.262) 0:01:28.763 ********* 2026-03-17 01:05:28.903347 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 01:05:28.903351 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-17 01:05:28.903355 | orchestrator | 2026-03-17 01:05:28.903359 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-17 01:05:28.903364 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:02.610) 0:01:31.374 ********* 2026-03-17 01:05:28.903370 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903375 | orchestrator | 2026-03-17 01:05:28.903379 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:05:28.903384 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:14.674) 0:01:46.048 ********* 2026-03-17 01:05:28.903388 | orchestrator | 2026-03-17 01:05:28.903392 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:05:28.903396 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:00.063) 0:01:46.111 ********* 2026-03-17 01:05:28.903401 | orchestrator | 2026-03-17 01:05:28.903405 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:05:28.903409 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:00.059) 0:01:46.171 ********* 2026-03-17 01:05:28.903414 | orchestrator | 2026-03-17 01:05:28.903418 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-17 01:05:28.903423 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:00.061) 0:01:46.232 ********* 2026-03-17 01:05:28.903427 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903431 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:28.903435 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903440 | orchestrator | 2026-03-17 01:05:28.903444 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-17 01:05:28.903448 | orchestrator | Tuesday 17 March 2026 01:04:34 +0000 (0:00:13.024) 0:01:59.258 ********* 2026-03-17 01:05:28.903455 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903460 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:28.903464 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903468 | orchestrator | 2026-03-17 01:05:28.903473 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-17 01:05:28.903477 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:07.173) 0:02:06.431 ********* 2026-03-17 01:05:28.903481 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903486 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903490 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:28.903494 | orchestrator | 2026-03-17 01:05:28.903499 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-17 01:05:28.903503 | orchestrator | Tuesday 17 March 2026 01:04:53 +0000 (0:00:11.514) 0:02:17.945 ********* 2026-03-17 01:05:28.903507 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903512 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:28.903516 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903521 | orchestrator | 2026-03-17 01:05:28.903525 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-17 01:05:28.903530 | orchestrator | Tuesday 17 March 2026 01:04:58 +0000 (0:00:04.871) 0:02:22.817 ********* 2026-03-17 01:05:28.903536 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903555 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903562 | orchestrator | 
changed: [testbed-node-1] 2026-03-17 01:05:28.903569 | orchestrator | 2026-03-17 01:05:28.903575 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-17 01:05:28.903582 | orchestrator | Tuesday 17 March 2026 01:05:08 +0000 (0:00:10.411) 0:02:33.229 ********* 2026-03-17 01:05:28.903589 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:28.903596 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903607 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:28.903612 | orchestrator | 2026-03-17 01:05:28.903617 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-17 01:05:28.903621 | orchestrator | Tuesday 17 March 2026 01:05:19 +0000 (0:00:10.674) 0:02:43.903 ********* 2026-03-17 01:05:28.903626 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:28.903630 | orchestrator | 2026-03-17 01:05:28.903634 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:05:28.903639 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:05:28.903644 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:05:28.903649 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:05:28.903653 | orchestrator | 2026-03-17 01:05:28.903660 | orchestrator | 2026-03-17 01:05:28.903667 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:05:28.903674 | orchestrator | Tuesday 17 March 2026 01:05:26 +0000 (0:00:07.076) 0:02:50.980 ********* 2026-03-17 01:05:28.903681 | orchestrator | =============================================================================== 2026-03-17 01:05:28.903686 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 18.53s 2026-03-17 01:05:28.903691 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.67s 2026-03-17 01:05:28.903695 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.03s 2026-03-17 01:05:28.903700 | orchestrator | designate : Restart designate-central container ------------------------ 11.51s 2026-03-17 01:05:28.903704 | orchestrator | designate : Restart designate-worker container ------------------------- 10.67s 2026-03-17 01:05:28.903708 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.41s 2026-03-17 01:05:28.903712 | orchestrator | designate : Restart designate-api container ----------------------------- 7.17s 2026-03-17 01:05:28.903716 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.15s 2026-03-17 01:05:28.903721 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.08s 2026-03-17 01:05:28.903725 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.13s 2026-03-17 01:05:28.903729 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.70s 2026-03-17 01:05:28.903736 | orchestrator | designate : Copying over config.json files for services ----------------- 5.59s 2026-03-17 01:05:28.903741 | orchestrator | designate : Restart designate-producer container ------------------------ 4.87s 2026-03-17 01:05:28.903745 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.61s 2026-03-17 01:05:28.903749 | orchestrator | designate : Check designate containers ---------------------------------- 4.31s 2026-03-17 01:05:28.903754 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.09s 2026-03-17 01:05:28.903760 | orchestrator | designate : Copying over rndc.conf 
-------------------------------------- 3.85s 2026-03-17 01:05:28.903766 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.71s 2026-03-17 01:05:28.903775 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.68s 2026-03-17 01:05:28.903784 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.60s 2026-03-17 01:05:28.903790 | orchestrator | 2026-03-17 01:05:28 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state STARTED 2026-03-17 01:05:28.903797 | orchestrator | 2026-03-17 01:05:28 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:05:28.903807 | orchestrator | 2026-03-17 01:05:28 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:05:28.903818 | orchestrator | 2026-03-17 01:05:28 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:05:28.903824 | orchestrator | 2026-03-17 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:31.932070 | orchestrator | 2026-03-17 01:05:31 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state STARTED 2026-03-17 01:05:31.932123 | orchestrator | 2026-03-17 01:05:31 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:05:31.932842 | orchestrator | 2026-03-17 01:05:31 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:05:31.933591 | orchestrator | 2026-03-17 01:05:31 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:05:31.933616 | orchestrator | 2026-03-17 01:05:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:34.958729 | orchestrator | 2026-03-17 01:05:34 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state STARTED 2026-03-17 01:05:34.962234 | orchestrator | 2026-03-17 01:05:34 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state 
STARTED 2026-03-17 01:05:34.962764 | orchestrator | 2026-03-17 01:05:34 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:05:34.963408 | orchestrator | 2026-03-17 01:05:34 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:05:34.963436 | orchestrator | 2026-03-17 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:02.334189 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state STARTED 2026-03-17 01:06:02.335495 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:02.336105 | orchestrator | 2026-03-17 01:06:02 | INFO  |
Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:02.336901 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:02.336921 | orchestrator | 2026-03-17 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:05.372073 | orchestrator | 2026-03-17 01:06:05 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state STARTED 2026-03-17 01:06:05.372117 | orchestrator | 2026-03-17 01:06:05 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:05.372854 | orchestrator | 2026-03-17 01:06:05 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:05.374184 | orchestrator | 2026-03-17 01:06:05 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:05.374225 | orchestrator | 2026-03-17 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:08.415601 | orchestrator | 2026-03-17 01:06:08 | INFO  | Task c7e6847f-c432-4552-a830-5a8485ae05be is in state SUCCESS 2026-03-17 01:06:08.415647 | orchestrator | 2026-03-17 01:06:08 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:08.415814 | orchestrator | 2026-03-17 01:06:08 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:08.417291 | orchestrator | 2026-03-17 01:06:08 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:08.417341 | orchestrator | 2026-03-17 01:06:08 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:08.417351 | orchestrator | 2026-03-17 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:11.446397 | orchestrator | 2026-03-17 01:06:11 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:11.448015 | orchestrator | 2026-03-17 01:06:11 | INFO  | Task 
9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:11.449190 | orchestrator | 2026-03-17 01:06:11 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:11.449682 | orchestrator | 2026-03-17 01:06:11 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:11.449716 | orchestrator | 2026-03-17 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:14.492670 | orchestrator | 2026-03-17 01:06:14 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:14.494551 | orchestrator | 2026-03-17 01:06:14 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:14.497706 | orchestrator | 2026-03-17 01:06:14 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:14.500033 | orchestrator | 2026-03-17 01:06:14 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:14.500079 | orchestrator | 2026-03-17 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:17.546644 | orchestrator | 2026-03-17 01:06:17 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:17.547351 | orchestrator | 2026-03-17 01:06:17 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:17.547378 | orchestrator | 2026-03-17 01:06:17 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:17.548180 | orchestrator | 2026-03-17 01:06:17 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:17.548210 | orchestrator | 2026-03-17 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:20.657192 | orchestrator | 2026-03-17 01:06:20 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:20.658650 | orchestrator | 2026-03-17 01:06:20 | INFO  | Task 
9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:20.660258 | orchestrator | 2026-03-17 01:06:20 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:20.660682 | orchestrator | 2026-03-17 01:06:20 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:20.660708 | orchestrator | 2026-03-17 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:23.704920 | orchestrator | 2026-03-17 01:06:23 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:23.706773 | orchestrator | 2026-03-17 01:06:23 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:23.708380 | orchestrator | 2026-03-17 01:06:23 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state STARTED 2026-03-17 01:06:23.710135 | orchestrator | 2026-03-17 01:06:23 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:23.710180 | orchestrator | 2026-03-17 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:26.744562 | orchestrator | 2026-03-17 01:06:26 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:26.746759 | orchestrator | 2026-03-17 01:06:26 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:26.747908 | orchestrator | 2026-03-17 01:06:26 | INFO  | Task 87e1a399-d6c0-4a2a-b3e6-82a031e1c630 is in state SUCCESS 2026-03-17 01:06:26.748789 | orchestrator | 2026-03-17 01:06:26.748832 | orchestrator | 2026-03-17 01:06:26.748839 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:06:26.748846 | orchestrator | 2026-03-17 01:06:26.748852 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:06:26.748868 | orchestrator | Tuesday 17 March 2026 01:05:31 +0000 (0:00:00.472) 0:00:00.472 
*********
2026-03-17 01:06:26.748874 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:06:26.748879 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:06:26.748883 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:06:26.748887 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:06:26.748891 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:06:26.748894 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:06:26.748898 | orchestrator | ok: [testbed-manager]
2026-03-17 01:06:26.748902 | orchestrator |
2026-03-17 01:06:26.748906 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:06:26.748909 | orchestrator | Tuesday 17 March 2026 01:05:33 +0000 (0:00:01.519) 0:00:01.992 *********
2026-03-17 01:06:26.748913 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748917 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748922 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748928 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748933 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748938 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748944 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:26.748948 | orchestrator |
2026-03-17 01:06:26.748954 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-17 01:06:26.748959 | orchestrator |
2026-03-17 01:06:26.748965 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-17 01:06:26.748970 | orchestrator | Tuesday 17 March 2026 01:05:35 +0000 (0:00:02.336) 0:00:04.329 *********
2026-03-17 01:06:26.748976 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-17 01:06:26.748990 | orchestrator |
2026-03-17 01:06:26.748996 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-17 01:06:26.749017 | orchestrator | Tuesday 17 March 2026 01:05:38 +0000 (0:00:02.699) 0:00:07.029 *********
2026-03-17 01:06:26.749024 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-03-17 01:06:26.749030 | orchestrator |
2026-03-17 01:06:26.749036 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-17 01:06:26.749040 | orchestrator | Tuesday 17 March 2026 01:05:42 +0000 (0:00:04.036) 0:00:11.065 *********
2026-03-17 01:06:26.749044 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-17 01:06:26.749049 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-17 01:06:26.749053 | orchestrator |
2026-03-17 01:06:26.749057 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-17 01:06:26.749060 | orchestrator | Tuesday 17 March 2026 01:05:48 +0000 (0:00:06.242) 0:00:17.308 *********
2026-03-17 01:06:26.749064 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:06:26.749068 | orchestrator |
2026-03-17 01:06:26.749072 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-17 01:06:26.749076 | orchestrator | Tuesday 17 March 2026 01:05:51 +0000 (0:00:02.983) 0:00:20.292 *********
2026-03-17 01:06:26.749080 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-03-17 01:06:26.749084 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:06:26.749087 | orchestrator |
2026-03-17 01:06:26.749091 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-17 01:06:26.749095 | orchestrator | Tuesday 17 March 2026 01:05:55 +0000 (0:00:03.250) 0:00:23.543 *********
2026-03-17 01:06:26.749099 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:06:26.749103 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-03-17 01:06:26.749107 | orchestrator |
2026-03-17 01:06:26.749111 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-17 01:06:26.749115 | orchestrator | Tuesday 17 March 2026 01:06:00 +0000 (0:00:05.547) 0:00:29.091 *********
2026-03-17 01:06:26.749118 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-03-17 01:06:26.749122 | orchestrator |
2026-03-17 01:06:26.749126 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:06:26.749129 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749209 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749217 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749222 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749415 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749433 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749438 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:26.749443 | orchestrator |
2026-03-17 01:06:26.749448 | orchestrator |
2026-03-17 01:06:26.749459 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:06:26.749465 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:05.947) 0:00:35.038 *********
2026-03-17 01:06:26.749470 | orchestrator | ===============================================================================
2026-03-17 01:06:26.749476 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.25s
2026-03-17 01:06:26.749504 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.95s
2026-03-17 01:06:26.749511 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.55s
2026-03-17 01:06:26.749514 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.04s
2026-03-17 01:06:26.749518 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.25s
2026-03-17 01:06:26.749521 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.98s
2026-03-17 01:06:26.749524 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.70s
2026-03-17 01:06:26.749527 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.34s
2026-03-17 01:06:26.749530 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.52s
2026-03-17 01:06:26.749533 | orchestrator |
2026-03-17 01:06:26.749578 | orchestrator |
2026-03-17 01:06:26.749584 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:06:26.749590 | orchestrator |
2026-03-17 01:06:26.749596 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:06:26.749600 | orchestrator | Tuesday 17 March 2026 01:04:42 +0000 (0:00:00.973) 0:00:00.973 *********
2026-03-17 01:06:26.749604 |
orchestrator | ok: [testbed-node-0]
2026-03-17 01:06:26.749607 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:06:26.749611 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:06:26.749614 | orchestrator |
2026-03-17 01:06:26.749617 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:06:26.749620 | orchestrator | Tuesday 17 March 2026 01:04:43 +0000 (0:00:00.753) 0:00:01.726 *********
2026-03-17 01:06:26.749623 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-17 01:06:26.749627 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-17 01:06:26.749630 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-17 01:06:26.749633 | orchestrator |
2026-03-17 01:06:26.749637 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-17 01:06:26.749640 | orchestrator |
2026-03-17 01:06:26.749643 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-17 01:06:26.749646 | orchestrator | Tuesday 17 March 2026 01:04:44 +0000 (0:00:00.546) 0:00:02.273 *********
2026-03-17 01:06:26.749651 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:06:26.749657 | orchestrator |
2026-03-17 01:06:26.749665 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-17 01:06:26.749671 | orchestrator | Tuesday 17 March 2026 01:04:44 +0000 (0:00:00.786) 0:00:03.060 *********
2026-03-17 01:06:26.749676 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-17 01:06:26.749681 | orchestrator |
2026-03-17 01:06:26.749685 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-17 01:06:26.749690 | orchestrator | Tuesday 17 March 2026 01:04:49 +0000 (0:00:04.149) 0:00:07.209 *********
2026-03-17 01:06:26.749695 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-17 01:06:26.749700 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-17 01:06:26.749705 | orchestrator |
2026-03-17 01:06:26.749710 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-17 01:06:26.749715 | orchestrator | Tuesday 17 March 2026 01:04:54 +0000 (0:00:02.935) 0:00:12.922 *********
2026-03-17 01:06:26.749720 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:06:26.749725 | orchestrator |
2026-03-17 01:06:26.749730 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-17 01:06:26.749735 | orchestrator | Tuesday 17 March 2026 01:04:57 +0000 (0:00:02.935) 0:00:15.857 *********
2026-03-17 01:06:26.749741 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-17 01:06:26.749752 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:06:26.749758 | orchestrator |
2026-03-17 01:06:26.749764 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-17 01:06:26.749770 | orchestrator | Tuesday 17 March 2026 01:05:01 +0000 (0:00:03.714) 0:00:19.571 *********
2026-03-17 01:06:26.749775 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:06:26.749780 | orchestrator |
2026-03-17 01:06:26.749785 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-17 01:06:26.749790 | orchestrator | Tuesday 17 March 2026 01:05:04 +0000 (0:00:03.289) 0:00:22.861 *********
2026-03-17 01:06:26.749798 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-17 01:06:26.749804 | orchestrator |
2026-03-17 01:06:26.749809 | orchestrator | TASK
[magnum : Creating Magnum trustee domain] *********************************
2026-03-17 01:06:26.749814 | orchestrator | Tuesday 17 March 2026 01:05:07 +0000 (0:00:03.017) 0:00:25.878 *********
2026-03-17 01:06:26.749819 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:26.749824 | orchestrator |
2026-03-17 01:06:26.749830 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-17 01:06:26.749841 | orchestrator | Tuesday 17 March 2026 01:05:11 +0000 (0:00:03.324) 0:00:29.203 *********
2026-03-17 01:06:26.749846 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:26.749850 | orchestrator |
2026-03-17 01:06:26.749853 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-17 01:06:26.749860 | orchestrator | Tuesday 17 March 2026 01:05:15 +0000 (0:00:04.060) 0:00:33.263 *********
2026-03-17 01:06:26.749863 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:26.749866 | orchestrator |
2026-03-17 01:06:26.749869 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-17 01:06:26.749873 | orchestrator | Tuesday 17 March 2026 01:05:18 +0000 (0:00:03.000) 0:00:36.264 *********
2026-03-17 01:06:26.749877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False,
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.749884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.749888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.749950 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.749967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.749972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.749977 | orchestrator | 2026-03-17 01:06:26.749982 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-17 01:06:26.749987 | orchestrator | Tuesday 17 March 2026 01:05:20 +0000 (0:00:02.695) 0:00:38.960 ********* 2026-03-17 01:06:26.749992 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.749997 | orchestrator | 2026-03-17 01:06:26.750001 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-17 01:06:26.750006 | orchestrator | Tuesday 17 March 2026 01:05:20 +0000 (0:00:00.116) 0:00:39.077 ********* 2026-03-17 01:06:26.750011 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.750041 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:26.750047 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:26.750051 | orchestrator | 2026-03-17 01:06:26.750057 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-17 01:06:26.750062 | orchestrator | Tuesday 17 March 2026 01:05:21 +0000 (0:00:00.245) 0:00:39.322 ********* 2026-03-17 01:06:26.750067 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:06:26.750077 | orchestrator | 2026-03-17 01:06:26.750082 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-17 01:06:26.750086 | orchestrator | Tuesday 17 March 2026 01:05:21 +0000 (0:00:00.762) 0:00:40.085 ********* 2026-03-17 01:06:26.750092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750145 | orchestrator | 2026-03-17 01:06:26.750149 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-17 01:06:26.750152 | orchestrator | Tuesday 17 March 2026 01:05:23 +0000 (0:00:01.988) 0:00:42.073 ********* 2026-03-17 01:06:26.750155 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:26.750159 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:26.750162 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:26.750165 | orchestrator | 2026-03-17 01:06:26.750168 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:06:26.750172 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:00.355) 0:00:42.429 ********* 2026-03-17 01:06:26.750175 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:06:26.750178 | orchestrator | 
2026-03-17 01:06:26.750181 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-17 01:06:26.750185 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:00.471) 0:00:42.900 ********* 2026-03-17 01:06:26.750198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750246 | orchestrator | 2026-03-17 01:06:26.750252 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-17 01:06:26.750257 | orchestrator | Tuesday 17 March 2026 01:05:26 +0000 (0:00:02.052) 0:00:44.953 ********* 2026-03-17 01:06:26.750263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750278 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.750284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750303 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:26.750307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750312 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:26.750316 | orchestrator | 2026-03-17 01:06:26.750319 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-17 01:06:26.750322 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:00.953) 0:00:45.907 ********* 2026-03-17 01:06:26.750325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750332 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.750338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2026-03-17 01:06:26.750341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750347 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:26.750351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750377 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:26.750383 | orchestrator | 2026-03-17 01:06:26.750388 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-17 01:06:26.750394 | orchestrator | Tuesday 17 March 2026 01:05:29 +0000 (0:00:01.811) 0:00:47.719 ********* 2026-03-17 01:06:26.750400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750445 | orchestrator | 2026-03-17 01:06:26.750451 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-17 01:06:26.750456 | orchestrator | Tuesday 17 March 2026 01:05:32 +0000 (0:00:03.153) 0:00:50.872 ********* 2026-03-17 01:06:26.750469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750529 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750556 | orchestrator | 2026-03-17 01:06:26.750562 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-17 01:06:26.750567 | orchestrator | Tuesday 17 March 2026 01:05:40 +0000 (0:00:07.839) 0:00:58.712 ********* 2026-03-17 01:06:26.750573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750585 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.750591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750606 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:26.750618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:26.750625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:26.750631 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:26.750636 | orchestrator | 2026-03-17 01:06:26.750642 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-17 01:06:26.750648 | orchestrator | Tuesday 17 March 2026 01:05:41 +0000 (0:00:01.080) 0:00:59.792 ********* 2026-03-17 01:06:26.750655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:26.750682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:26.750700 | orchestrator | 2026-03-17 01:06:26.750706 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:06:26.750712 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:02.764) 0:01:02.556 ********* 2026-03-17 01:06:26.750718 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:26.750723 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:26.750729 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:26.750734 | orchestrator | 2026-03-17 01:06:26.750740 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-17 01:06:26.750748 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:00.497) 0:01:03.053 ********* 2026-03-17 01:06:26.750758 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:26.750768 | orchestrator | 2026-03-17 01:06:26.750778 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-17 01:06:26.750788 | orchestrator | Tuesday 17 March 2026 01:05:47 +0000 (0:00:02.132) 0:01:05.185 ********* 2026-03-17 01:06:26.750801 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:26.750811 | orchestrator | 2026-03-17 01:06:26.750821 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-17 01:06:26.750831 | orchestrator | 
Tuesday 17 March 2026 01:05:49 +0000 (0:00:02.384) 0:01:07.571 ********* 2026-03-17 01:06:26.750841 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:26.750851 | orchestrator | 2026-03-17 01:06:26.750862 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:26.750872 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:13.637) 0:01:21.209 ********* 2026-03-17 01:06:26.750882 | orchestrator | 2026-03-17 01:06:26.750892 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:26.750900 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.155) 0:01:21.365 ********* 2026-03-17 01:06:26.750905 | orchestrator | 2026-03-17 01:06:26.750911 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:26.750916 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.048) 0:01:21.413 ********* 2026-03-17 01:06:26.750921 | orchestrator | 2026-03-17 01:06:26.750926 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-17 01:06:26.750932 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.050) 0:01:21.464 ********* 2026-03-17 01:06:26.750937 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:26.750943 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:26.750949 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:26.750954 | orchestrator | 2026-03-17 01:06:26.750960 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-17 01:06:26.750969 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:13.646) 0:01:35.110 ********* 2026-03-17 01:06:26.750975 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:26.750980 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:26.750985 | orchestrator | changed: [testbed-node-1] 
2026-03-17 01:06:26.750991 | orchestrator | 2026-03-17 01:06:26.750999 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:06:26.751005 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:06:26.751011 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:06:26.751017 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:06:26.751023 | orchestrator | 2026-03-17 01:06:26.751029 | orchestrator | 2026-03-17 01:06:26.751035 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:06:26.751041 | orchestrator | Tuesday 17 March 2026 01:06:24 +0000 (0:00:07.984) 0:01:43.095 ********* 2026-03-17 01:06:26.751047 | orchestrator | =============================================================================== 2026-03-17 01:06:26.751053 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.65s 2026-03-17 01:06:26.751059 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.64s 2026-03-17 01:06:26.751064 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 7.98s 2026-03-17 01:06:26.751070 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.84s 2026-03-17 01:06:26.751076 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.71s 2026-03-17 01:06:26.751081 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.15s 2026-03-17 01:06:26.751086 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.06s 2026-03-17 01:06:26.751091 | orchestrator | service-ks-register : magnum | Creating users 
--------------------------- 3.71s 2026-03-17 01:06:26.751096 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.33s 2026-03-17 01:06:26.751105 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.29s 2026-03-17 01:06:26.751110 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.15s 2026-03-17 01:06:26.751116 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.02s 2026-03-17 01:06:26.751121 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.00s 2026-03-17 01:06:26.751127 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.93s 2026-03-17 01:06:26.751133 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.76s 2026-03-17 01:06:26.751138 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.70s 2026-03-17 01:06:26.751143 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.39s 2026-03-17 01:06:26.751149 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.13s 2026-03-17 01:06:26.751154 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.05s 2026-03-17 01:06:26.751160 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 1.99s 2026-03-17 01:06:26.751165 | orchestrator | 2026-03-17 01:06:26 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:26.751172 | orchestrator | 2026-03-17 01:06:26 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:26.751177 | orchestrator | 2026-03-17 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:29.786358 | orchestrator | 2026-03-17 01:06:29 | INFO  | Task 
c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:29.789400 | orchestrator | 2026-03-17 01:06:29 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:29.789878 | orchestrator | 2026-03-17 01:06:29 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:29.793382 | orchestrator | 2026-03-17 01:06:29 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:29.793428 | orchestrator | 2026-03-17 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:32.833683 | orchestrator | 2026-03-17 01:06:32 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:32.835820 | orchestrator | 2026-03-17 01:06:32 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:32.837407 | orchestrator | 2026-03-17 01:06:32 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:32.839059 | orchestrator | 2026-03-17 01:06:32 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:32.839375 | orchestrator | 2026-03-17 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:35.865701 | orchestrator | 2026-03-17 01:06:35 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:35.867352 | orchestrator | 2026-03-17 01:06:35 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:35.868072 | orchestrator | 2026-03-17 01:06:35 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:35.868837 | orchestrator | 2026-03-17 01:06:35 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:35.868857 | orchestrator | 2026-03-17 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:38.899231 | orchestrator | 2026-03-17 01:06:38 | INFO  | Task 
c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state STARTED 2026-03-17 01:06:38.900759 | orchestrator | 2026-03-17 01:06:38 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:38.901669 | orchestrator | 2026-03-17 01:06:38 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:38.903411 | orchestrator | 2026-03-17 01:06:38 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:38.903439 | orchestrator | 2026-03-17 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:41.947609 | orchestrator | 2026-03-17 01:06:41 | INFO  | Task c1f2eaee-b1db-4d9b-a3f1-615c59e641bf is in state SUCCESS 2026-03-17 01:06:41.948482 | orchestrator | 2026-03-17 01:06:41.948546 | orchestrator | 2026-03-17 01:06:41.948557 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:06:41.948564 | orchestrator | 2026-03-17 01:06:41.948571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:06:41.948578 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.300) 0:00:00.300 ********* 2026-03-17 01:06:41.948585 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:41.948592 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:41.948599 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:41.948603 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:41.948607 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:41.948611 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:41.948615 | orchestrator | 2026-03-17 01:06:41.948619 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:06:41.948623 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.544) 0:00:00.845 ********* 2026-03-17 01:06:41.948626 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-17 
01:06:41.948631 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-17 01:06:41.948635 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-17 01:06:41.948638 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-17 01:06:41.948642 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-17 01:06:41.948646 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-17 01:06:41.948650 | orchestrator | 2026-03-17 01:06:41.948654 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-17 01:06:41.948657 | orchestrator | 2026-03-17 01:06:41.948661 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:41.948665 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:00.642) 0:00:01.487 ********* 2026-03-17 01:06:41.948669 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:06:41.948674 | orchestrator | 2026-03-17 01:06:41.948678 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-17 01:06:41.948682 | orchestrator | Tuesday 17 March 2026 01:02:37 +0000 (0:00:00.889) 0:00:02.376 ********* 2026-03-17 01:06:41.948685 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:41.948689 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:41.948693 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:41.948697 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:41.948701 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:41.948704 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:41.948708 | orchestrator | 2026-03-17 01:06:41.948712 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-17 01:06:41.948716 | orchestrator 
| Tuesday 17 March 2026 01:02:38 +0000 (0:00:01.226) 0:00:03.603 ********* 2026-03-17 01:06:41.948720 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:41.948724 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:41.948727 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:41.948781 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:41.948785 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:41.948841 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:41.948846 | orchestrator | 2026-03-17 01:06:41.948850 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-17 01:06:41.948873 | orchestrator | Tuesday 17 March 2026 01:02:39 +0000 (0:00:00.988) 0:00:04.592 ********* 2026-03-17 01:06:41.948878 | orchestrator | ok: [testbed-node-0] => { 2026-03-17 01:06:41.948882 | orchestrator |  "changed": false, 2026-03-17 01:06:41.948885 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:41.948889 | orchestrator | } 2026-03-17 01:06:41.948893 | orchestrator | ok: [testbed-node-1] => { 2026-03-17 01:06:41.948897 | orchestrator |  "changed": false, 2026-03-17 01:06:41.948901 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:41.948905 | orchestrator | } 2026-03-17 01:06:41.948930 | orchestrator | ok: [testbed-node-2] => { 2026-03-17 01:06:41.948936 | orchestrator |  "changed": false, 2026-03-17 01:06:41.948945 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:41.949113 | orchestrator | } 2026-03-17 01:06:41.949121 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 01:06:41.949130 | orchestrator |  "changed": false, 2026-03-17 01:06:41.949139 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:41.949144 | orchestrator | } 2026-03-17 01:06:41.949150 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 01:06:41.949156 | orchestrator |  "changed": false, 2026-03-17 01:06:41.949162 | orchestrator |  "msg": "All assertions passed" 2026-03-17 
01:06:41.949167 | orchestrator | } 2026-03-17 01:06:41.949172 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 01:06:41.949178 | orchestrator |  "changed": false, 2026-03-17 01:06:41.949183 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:41.949188 | orchestrator | } 2026-03-17 01:06:41.949194 | orchestrator | 2026-03-17 01:06:41.949208 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-17 01:06:41.949215 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:00.493) 0:00:05.086 ********* 2026-03-17 01:06:41.949221 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949227 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949324 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949332 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.949336 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.949340 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949343 | orchestrator | 2026-03-17 01:06:41.949347 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-17 01:06:41.949351 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:00.582) 0:00:05.668 ********* 2026-03-17 01:06:41.949355 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-17 01:06:41.949360 | orchestrator | 2026-03-17 01:06:41.949366 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-17 01:06:41.949372 | orchestrator | Tuesday 17 March 2026 01:02:45 +0000 (0:00:04.148) 0:00:09.817 ********* 2026-03-17 01:06:41.949378 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-17 01:06:41.949385 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-17 01:06:41.949391 | orchestrator | 
2026-03-17 01:06:41.949420 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-17 01:06:41.949428 | orchestrator | Tuesday 17 March 2026 01:02:51 +0000 (0:00:06.814) 0:00:16.631 ********* 2026-03-17 01:06:41.949434 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:06:41.949441 | orchestrator | 2026-03-17 01:06:41.949447 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-17 01:06:41.949453 | orchestrator | Tuesday 17 March 2026 01:02:55 +0000 (0:00:03.271) 0:00:19.903 ********* 2026-03-17 01:06:41.949457 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-17 01:06:41.949479 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:06:41.949483 | orchestrator | 2026-03-17 01:06:41.949487 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-17 01:06:41.949491 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:04.616) 0:00:24.520 ********* 2026-03-17 01:06:41.949502 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:06:41.949506 | orchestrator | 2026-03-17 01:06:41.949510 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-17 01:06:41.949514 | orchestrator | Tuesday 17 March 2026 01:03:03 +0000 (0:00:03.386) 0:00:27.906 ********* 2026-03-17 01:06:41.949517 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-17 01:06:41.949521 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-17 01:06:41.949525 | orchestrator | 2026-03-17 01:06:41.949529 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:41.949532 | orchestrator | Tuesday 17 March 2026 01:03:11 +0000 (0:00:08.808) 0:00:36.715 ********* 2026-03-17 01:06:41.949536 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949540 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949544 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949547 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.949551 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.949555 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949559 | orchestrator | 2026-03-17 01:06:41.949562 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-17 01:06:41.949566 | orchestrator | Tuesday 17 March 2026 01:03:12 +0000 (0:00:00.482) 0:00:37.197 ********* 2026-03-17 01:06:41.949570 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949574 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949578 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949581 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.949585 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949589 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.949593 | orchestrator | 2026-03-17 01:06:41.949597 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-17 01:06:41.949600 | orchestrator | Tuesday 17 March 2026 01:03:14 +0000 (0:00:02.365) 0:00:39.563 ********* 2026-03-17 01:06:41.949604 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:41.949608 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:41.949612 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:41.949616 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:41.949620 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:41.949623 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:41.949627 | orchestrator | 2026-03-17 01:06:41.949631 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-17 01:06:41.949635 | orchestrator | 
Tuesday 17 March 2026 01:03:15 +0000 (0:00:00.990) 0:00:40.553 ********* 2026-03-17 01:06:41.949639 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949642 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949646 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.949650 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949654 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949657 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.949661 | orchestrator | 2026-03-17 01:06:41.949665 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-17 01:06:41.949671 | orchestrator | Tuesday 17 March 2026 01:03:18 +0000 (0:00:02.297) 0:00:42.850 ********* 2026-03-17 01:06:41.949680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949737 | orchestrator | 2026-03-17 01:06:41.949741 | orchestrator | TASK 
[neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-17 01:06:41.949746 | orchestrator | Tuesday 17 March 2026 01:03:20 +0000 (0:00:02.489) 0:00:45.340 ********* 2026-03-17 01:06:41.949750 | orchestrator | [WARNING]: Skipped 2026-03-17 01:06:41.949754 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-17 01:06:41.949758 | orchestrator | due to this access issue: 2026-03-17 01:06:41.949762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-17 01:06:41.949766 | orchestrator | a directory 2026-03-17 01:06:41.949770 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:06:41.949774 | orchestrator | 2026-03-17 01:06:41.949777 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:41.949792 | orchestrator | Tuesday 17 March 2026 01:03:21 +0000 (0:00:00.925) 0:00:46.265 ********* 2026-03-17 01:06:41.949797 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:06:41.949802 | orchestrator | 2026-03-17 01:06:41.949805 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-17 01:06:41.949809 | orchestrator | Tuesday 17 March 2026 01:03:22 +0000 (0:00:01.198) 0:00:47.463 ********* 2026-03-17 01:06:41.949813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.949852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.949856 | orchestrator | 2026-03-17 01:06:41.949860 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-17 01:06:41.949864 | orchestrator | Tuesday 17 March 2026 01:03:26 +0000 (0:00:03.813) 0:00:51.276 ********* 2026-03-17 01:06:41.949868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949873 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949885 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.949894 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.949909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.949914 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949922 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.949933 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.949937 | orchestrator | 2026-03-17 01:06:41.949941 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-17 01:06:41.949945 | orchestrator | Tuesday 17 March 2026 01:03:28 +0000 (0:00:02.329) 0:00:53.606 ********* 2026-03-17 01:06:41.949950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949955 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.949962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949968 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.949973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.949977 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.949982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.949989 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.949993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.949998 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950036 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950042 | orchestrator | 2026-03-17 01:06:41.950046 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-17 01:06:41.950051 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:03.166) 0:00:56.773 ********* 2026-03-17 01:06:41.950055 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950059 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950064 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950068 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950073 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950077 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950081 | orchestrator | 2026-03-17 01:06:41.950086 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-17 01:06:41.950095 | orchestrator | Tuesday 17 March 2026 01:03:34 +0000 (0:00:02.672) 0:00:59.446 ********* 2026-03-17 01:06:41.950101 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950105 | orchestrator | 2026-03-17 01:06:41.950109 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-17 01:06:41.950162 | orchestrator | Tuesday 17 March 2026 01:03:34 +0000 (0:00:00.183) 0:00:59.629 ********* 2026-03-17 01:06:41.950169 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950181 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950186 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950190 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950195 | orchestrator | skipping: 
[testbed-node-4] 2026-03-17 01:06:41.950199 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950204 | orchestrator | 2026-03-17 01:06:41.950208 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-17 01:06:41.950213 | orchestrator | Tuesday 17 March 2026 01:03:35 +0000 (0:00:00.463) 0:01:00.093 ********* 2026-03-17 01:06:41.950217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950227 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950236 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950247 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-17 01:06:41.950257 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950274 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950283 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950287 | orchestrator | 2026-03-17 01:06:41.950293 
| orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-17 01:06:41.950297 | orchestrator | Tuesday 17 March 2026 01:03:38 +0000 (0:00:02.771) 0:01:02.865 ********* 2026-03-17 01:06:41.950302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950317 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950340 | orchestrator | 2026-03-17 01:06:41.950344 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-17 01:06:41.950349 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:03.210) 0:01:06.075 ********* 2026-03-17 01:06:41.950355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:41.950390 | orchestrator | 2026-03-17 01:06:41.950394 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-17 01:06:41.950398 | orchestrator | Tuesday 17 March 2026 01:03:48 +0000 (0:00:07.051) 0:01:13.127 ********* 2026-03-17 01:06:41.950406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950413 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950421 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950429 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950440 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950451 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950476 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950481 | orchestrator | 2026-03-17 01:06:41.950485 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-17 01:06:41.950489 | orchestrator | Tuesday 17 March 2026 01:03:50 +0000 (0:00:02.381) 0:01:15.508 ********* 2026-03-17 01:06:41.950493 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950497 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950500 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:41.950504 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950508 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:41.950512 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:41.950516 | orchestrator | 2026-03-17 01:06:41.950520 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-17 01:06:41.950524 | orchestrator | Tuesday 17 March 2026 01:03:53 +0000 (0:00:02.806) 0:01:18.315 ********* 2026-03-17 01:06:41.950528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950532 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950541 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.950554 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:41.950578 | orchestrator | 2026-03-17 01:06:41.950584 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-17 01:06:41.950591 | orchestrator | Tuesday 17 March 2026 01:03:57 +0000 (0:00:04.223) 0:01:22.538 ********* 2026-03-17 01:06:41.950601 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950608 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950614 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950620 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950626 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950631 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950636 | orchestrator | 2026-03-17 01:06:41.950642 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-17 01:06:41.950648 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:02.135) 
0:01:24.673 ********* 2026-03-17 01:06:41.950659 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950666 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950671 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950677 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950683 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950688 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950694 | orchestrator | 2026-03-17 01:06:41.950705 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-17 01:06:41.950711 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:01.905) 0:01:26.579 ********* 2026-03-17 01:06:41.950718 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950724 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950730 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950736 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950743 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950749 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950755 | orchestrator | 2026-03-17 01:06:41.950761 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-17 01:06:41.950767 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:01.788) 0:01:28.368 ********* 2026-03-17 01:06:41.950774 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950780 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950787 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950792 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950796 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950800 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950803 | orchestrator | 2026-03-17 01:06:41.950807 | orchestrator | TASK [neutron : Copying over 
eswitchd.conf] ************************************ 2026-03-17 01:06:41.950811 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:01.854) 0:01:30.223 ********* 2026-03-17 01:06:41.950815 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950819 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950823 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950827 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950836 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950840 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950844 | orchestrator | 2026-03-17 01:06:41.950847 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-17 01:06:41.950851 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:01.972) 0:01:32.195 ********* 2026-03-17 01:06:41.950855 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950859 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950863 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950867 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950870 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950874 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950878 | orchestrator | 2026-03-17 01:06:41.950882 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-17 01:06:41.950885 | orchestrator | Tuesday 17 March 2026 01:04:10 +0000 (0:00:02.706) 0:01:34.902 ********* 2026-03-17 01:06:41.950889 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950893 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.950897 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950901 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 01:06:41.950904 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950908 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950913 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950917 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.950920 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950928 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.950932 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:41.950936 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.950939 | orchestrator | 2026-03-17 01:06:41.950943 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-17 01:06:41.950947 | orchestrator | Tuesday 17 March 2026 01:04:11 +0000 (0:00:01.789) 0:01:36.692 ********* 2026-03-17 01:06:41.950951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-17 01:06:41.950963 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.950974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.950978 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.950986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-17 01:06:41.951047 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951070 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951078 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.951082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951086 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.951089 | orchestrator | 2026-03-17 01:06:41.951094 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-17 01:06:41.951098 | orchestrator | Tuesday 17 March 2026 01:04:13 +0000 (0:00:01.728) 0:01:38.420 ********* 2026-03-17 01:06:41.951104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.951108 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951116 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.951123 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:41.951131 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951135 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951139 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951149 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.951153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:41.951157 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.951161 | orchestrator | 2026-03-17 01:06:41.951165 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-17 01:06:41.951169 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:01.812) 0:01:40.232 ********* 2026-03-17 01:06:41.951173 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951179 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951183 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951187 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951195 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.951208 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.951219 | orchestrator | 2026-03-17 01:06:41.951225 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-17 01:06:41.951231 | orchestrator | Tuesday 17 March 2026 01:04:17 +0000 (0:00:02.265) 0:01:42.498 ********* 2026-03-17 01:06:41.951237 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951243 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951249 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951255 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:06:41.951261 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:06:41.951267 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:06:41.951273 | orchestrator | 
2026-03-17 01:06:41.951279 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-17 01:06:41.951286 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:03.867) 0:01:46.366 ********* 2026-03-17 01:06:41.951292 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951297 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951304 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951310 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951316 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.951323 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.951329 | orchestrator | 2026-03-17 01:06:41.951336 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-17 01:06:41.951341 | orchestrator | Tuesday 17 March 2026 01:04:24 +0000 (0:00:02.965) 0:01:49.332 ********* 2026-03-17 01:06:41.951344 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951348 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951352 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951356 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951360 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:41.951364 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:41.951367 | orchestrator | 2026-03-17 01:06:41.951371 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-17 01:06:41.951375 | orchestrator | Tuesday 17 March 2026 01:04:27 +0000 (0:00:02.445) 0:01:51.778 ********* 2026-03-17 01:06:41.951379 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:41.951383 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:41.951386 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:41.951390 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:41.951394 | orchestrator | 
skipping: [testbed-node-5]
2026-03-17 01:06:41.951397 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951401 | orchestrator | 
2026-03-17 01:06:41.951405 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-17 01:06:41.951409 | orchestrator | Tuesday 17 March 2026 01:04:29 +0000 (0:00:02.495) 0:01:54.273 *********
2026-03-17 01:06:41.951413 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951417 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951421 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951424 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951428 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951432 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951436 | orchestrator | 
2026-03-17 01:06:41.951440 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-17 01:06:41.951443 | orchestrator | Tuesday 17 March 2026 01:04:31 +0000 (0:00:01.788) 0:01:56.062 *********
2026-03-17 01:06:41.951447 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951451 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951455 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951459 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951596 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951601 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951605 | orchestrator | 
2026-03-17 01:06:41.951615 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-17 01:06:41.951619 | orchestrator | Tuesday 17 March 2026 01:04:33 +0000 (0:00:02.077) 0:01:58.140 *********
2026-03-17 01:06:41.951623 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951627 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951630 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951634 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951638 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951642 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951646 | orchestrator | 
2026-03-17 01:06:41.951649 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-17 01:06:41.951653 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:02.824) 0:02:00.964 *********
2026-03-17 01:06:41.951660 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951664 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951668 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951672 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951675 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951679 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951683 | orchestrator | 
2026-03-17 01:06:41.951687 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-17 01:06:41.951692 | orchestrator | Tuesday 17 March 2026 01:04:39 +0000 (0:00:02.932) 0:02:03.897 *********
2026-03-17 01:06:41.951696 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951702 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951706 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951710 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951715 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951719 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951723 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951728 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951739 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951744 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951748 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2) 
2026-03-17 01:06:41.951753 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951757 | orchestrator | 
2026-03-17 01:06:41.951762 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-17 01:06:41.951766 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:02.033) 0:02:05.930 *********
2026-03-17 01:06:41.951772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-17 01:06:41.951779 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-17 01:06:41.951799 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-17 01:06:41.951815 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-03-17 01:06:41.951825 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-03-17 01:06:41.951837 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-03-17 01:06:41.951849 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951854 | orchestrator | 
2026-03-17 01:06:41.951858 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-17 01:06:41.951863 | orchestrator | Tuesday 17 March 2026 01:04:44 +0000 (0:00:03.158) 0:02:09.088 *********
2026-03-17 01:06:41.951867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-17 01:06:41.951874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-17 01:06:41.951882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:06:41.951887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-17 01:06:41.951895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:06:41.951900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:06:41.951904 | orchestrator | 
2026-03-17 01:06:41.951909 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-17 01:06:41.951914 | orchestrator | Tuesday 17 March 2026 01:04:46 +0000 (0:00:02.534) 0:02:11.623 *********
2026-03-17 01:06:41.951918 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:41.951923 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:41.951927 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:41.951932 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:06:41.951936 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:06:41.951941 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:06:41.951945 | orchestrator | 
2026-03-17 01:06:41.951950 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-17 01:06:41.951954 | orchestrator | Tuesday 17 March 2026 01:04:47 +0000 (0:00:00.578) 0:02:12.202 *********
2026-03-17 01:06:41.951958 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:41.951963 | orchestrator | 
2026-03-17 01:06:41.951972 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-17 01:06:41.951976 | orchestrator | Tuesday 17 March 2026 01:04:49 +0000 (0:00:01.993) 0:02:14.195 *********
2026-03-17 01:06:41.951981 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:41.951985 | orchestrator | 
2026-03-17 01:06:41.951990 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-17 01:06:41.951994 | orchestrator | Tuesday 17 March 2026 01:04:51 +0000 (0:00:02.101) 0:02:16.297 *********
2026-03-17 01:06:41.951998 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:41.952003 | orchestrator | 
2026-03-17 01:06:41.952007 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952012 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:36.103) 0:02:52.400 *********
2026-03-17 01:06:41.952016 | orchestrator | 
2026-03-17 01:06:41.952020 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952025 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:00.121) 0:02:52.522 *********
2026-03-17 01:06:41.952029 | orchestrator | 
2026-03-17 01:06:41.952033 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952038 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:00.121) 0:02:52.643 *********
2026-03-17 01:06:41.952042 | orchestrator | 
2026-03-17 01:06:41.952046 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952051 | orchestrator | Tuesday 17 March 2026 01:05:28 +0000 (0:00:00.199) 0:02:52.843 *********
2026-03-17 01:06:41.952058 | orchestrator | 
2026-03-17 01:06:41.952065 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952070 | orchestrator | Tuesday 17 March 2026 01:05:28 +0000 (0:00:00.214) 0:02:53.057 *********
2026-03-17 01:06:41.952075 | orchestrator | 
2026-03-17 01:06:41.952084 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-17 01:06:41.952088 | orchestrator | Tuesday 17 March 2026 01:05:28 +0000 (0:00:00.195) 0:02:53.253 *********
2026-03-17 01:06:41.952093 | orchestrator | 
2026-03-17 01:06:41.952097 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-17 01:06:41.952102 | orchestrator | Tuesday 17 March 2026 01:05:28 +0000 (0:00:00.196) 0:02:53.449 *********
2026-03-17 01:06:41.952106 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:41.952111 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:06:41.952115 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:06:41.952119 | orchestrator | 
2026-03-17 01:06:41.952122 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-17 01:06:41.952126 | orchestrator | Tuesday 17 March 2026 01:05:56 +0000 (0:00:28.158) 0:03:21.608 *********
2026-03-17 01:06:41.952130 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:06:41.952134 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:06:41.952138 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:06:41.952141 | orchestrator | 
2026-03-17 01:06:41.952145 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:06:41.952149 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 01:06:41.952154 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-17 01:06:41.952158 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-17 01:06:41.952162 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 01:06:41.952166 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 01:06:41.952169 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 01:06:41.952173 | orchestrator | 
2026-03-17 01:06:41.952177 | orchestrator | 
2026-03-17 01:06:41.952181 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:06:41.952185 | orchestrator | Tuesday 17 March 2026 01:06:39 +0000 (0:00:42.267) 0:04:03.876 *********
2026-03-17 01:06:41.952189 | orchestrator | ===============================================================================
2026-03-17 01:06:41.952192 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 42.27s
2026-03-17 01:06:41.952196 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 36.10s
2026-03-17 01:06:41.952200 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.16s
2026-03-17 01:06:41.952204 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.81s
2026-03-17 01:06:41.952208 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.05s
2026-03-17 01:06:41.952211 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.81s
2026-03-17 01:06:41.952215 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.62s
2026-03-17 01:06:41.952219 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.22s
2026-03-17 01:06:41.952223 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.15s
2026-03-17 01:06:41.952229 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.87s
2026-03-17 01:06:41.952233 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.81s
2026-03-17 01:06:41.952244 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.39s
2026-03-17 01:06:41.952248 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.27s
2026-03-17 01:06:41.952256 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.21s
2026-03-17 01:06:41.952260 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.17s
2026-03-17 01:06:41.952264 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.16s
2026-03-17 01:06:41.952268 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 2.97s
2026-03-17 01:06:41.952271 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.93s
2026-03-17 01:06:41.952275 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 2.82s 2026-03-17 01:06:41.952279 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.81s 2026-03-17 01:06:41.952283 | orchestrator | 2026-03-17 01:06:41 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:41.952287 | orchestrator | 2026-03-17 01:06:41 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:41.952291 | orchestrator | 2026-03-17 01:06:41 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:41.952441 | orchestrator | 2026-03-17 01:06:41 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:41.952453 | orchestrator | 2026-03-17 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:44.992610 | orchestrator | 2026-03-17 01:06:44 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:44.992928 | orchestrator | 2026-03-17 01:06:44 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:44.995011 | orchestrator | 2026-03-17 01:06:44 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:44.996037 | orchestrator | 2026-03-17 01:06:44 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:44.996524 | orchestrator | 2026-03-17 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:48.099347 | orchestrator | 2026-03-17 01:06:48 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:48.100135 | orchestrator | 2026-03-17 01:06:48 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:48.100522 | orchestrator | 2026-03-17 01:06:48 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:48.101446 | 
orchestrator | 2026-03-17 01:06:48 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:48.101469 | orchestrator | 2026-03-17 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:51.139432 | orchestrator | 2026-03-17 01:06:51 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:51.142231 | orchestrator | 2026-03-17 01:06:51 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:51.145706 | orchestrator | 2026-03-17 01:06:51 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:51.147218 | orchestrator | 2026-03-17 01:06:51 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:51.147266 | orchestrator | 2026-03-17 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:54.184778 | orchestrator | 2026-03-17 01:06:54 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:54.184852 | orchestrator | 2026-03-17 01:06:54 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:54.185403 | orchestrator | 2026-03-17 01:06:54 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:54.186128 | orchestrator | 2026-03-17 01:06:54 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:54.186152 | orchestrator | 2026-03-17 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:57.210542 | orchestrator | 2026-03-17 01:06:57 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:06:57.210929 | orchestrator | 2026-03-17 01:06:57 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:06:57.211302 | orchestrator | 2026-03-17 01:06:57 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:06:57.211788 | orchestrator | 2026-03-17 
01:06:57 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:06:57.211839 | orchestrator | 2026-03-17 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:00.230730 | orchestrator | 2026-03-17 01:07:00 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:00.230908 | orchestrator | 2026-03-17 01:07:00 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:00.232952 | orchestrator | 2026-03-17 01:07:00 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:00.233703 | orchestrator | 2026-03-17 01:07:00 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:00.233756 | orchestrator | 2026-03-17 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:03.271778 | orchestrator | 2026-03-17 01:07:03 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:03.272267 | orchestrator | 2026-03-17 01:07:03 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:03.273164 | orchestrator | 2026-03-17 01:07:03 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:03.273873 | orchestrator | 2026-03-17 01:07:03 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:03.273986 | orchestrator | 2026-03-17 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:06.306796 | orchestrator | 2026-03-17 01:07:06 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:06.307268 | orchestrator | 2026-03-17 01:07:06 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:06.307881 | orchestrator | 2026-03-17 01:07:06 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:06.308615 | orchestrator | 2026-03-17 01:07:06 | INFO  | Task 
0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:06.308685 | orchestrator | 2026-03-17 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:09.332848 | orchestrator | 2026-03-17 01:07:09 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:09.333323 | orchestrator | 2026-03-17 01:07:09 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:09.333959 | orchestrator | 2026-03-17 01:07:09 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:09.334738 | orchestrator | 2026-03-17 01:07:09 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:09.334785 | orchestrator | 2026-03-17 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:12.409606 | orchestrator | 2026-03-17 01:07:12 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:12.409839 | orchestrator | 2026-03-17 01:07:12 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:12.410711 | orchestrator | 2026-03-17 01:07:12 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:12.412770 | orchestrator | 2026-03-17 01:07:12 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:12.412797 | orchestrator | 2026-03-17 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:15.445736 | orchestrator | 2026-03-17 01:07:15 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:15.446798 | orchestrator | 2026-03-17 01:07:15 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:15.447687 | orchestrator | 2026-03-17 01:07:15 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:15.448778 | orchestrator | 2026-03-17 01:07:15 | INFO  | Task 
0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:15.448916 | orchestrator | 2026-03-17 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:18.479732 | orchestrator | 2026-03-17 01:07:18 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:18.480264 | orchestrator | 2026-03-17 01:07:18 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:18.480963 | orchestrator | 2026-03-17 01:07:18 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:18.481757 | orchestrator | 2026-03-17 01:07:18 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:18.481814 | orchestrator | 2026-03-17 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:21.525252 | orchestrator | 2026-03-17 01:07:21 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:21.525310 | orchestrator | 2026-03-17 01:07:21 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:21.525936 | orchestrator | 2026-03-17 01:07:21 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:21.527318 | orchestrator | 2026-03-17 01:07:21 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:21.527353 | orchestrator | 2026-03-17 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:24.563494 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:24.566241 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:24.566942 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:24.567803 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task 
0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:24.567833 | orchestrator | 2026-03-17 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:27.591661 | orchestrator | 2026-03-17 01:07:27 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:27.591711 | orchestrator | 2026-03-17 01:07:27 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:27.592350 | orchestrator | 2026-03-17 01:07:27 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:27.593127 | orchestrator | 2026-03-17 01:07:27 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:27.593159 | orchestrator | 2026-03-17 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:30.620520 | orchestrator | 2026-03-17 01:07:30 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:30.620788 | orchestrator | 2026-03-17 01:07:30 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:30.621837 | orchestrator | 2026-03-17 01:07:30 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:30.622702 | orchestrator | 2026-03-17 01:07:30 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:30.622766 | orchestrator | 2026-03-17 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:33.709192 | orchestrator | 2026-03-17 01:07:33 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:33.709253 | orchestrator | 2026-03-17 01:07:33 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:33.709289 | orchestrator | 2026-03-17 01:07:33 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:33.709297 | orchestrator | 2026-03-17 01:07:33 | INFO  | Task 
0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:33.709304 | orchestrator | 2026-03-17 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:36.702947 | orchestrator | 2026-03-17 01:07:36 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:36.704880 | orchestrator | 2026-03-17 01:07:36 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:36.705015 | orchestrator | 2026-03-17 01:07:36 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:36.705771 | orchestrator | 2026-03-17 01:07:36 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:36.705792 | orchestrator | 2026-03-17 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:39.737796 | orchestrator | 2026-03-17 01:07:39 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state STARTED 2026-03-17 01:07:39.738734 | orchestrator | 2026-03-17 01:07:39 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:39.741547 | orchestrator | 2026-03-17 01:07:39 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:39.742215 | orchestrator | 2026-03-17 01:07:39 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:39.742233 | orchestrator | 2026-03-17 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:42.781876 | orchestrator | 2026-03-17 01:07:42 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:07:42.784571 | orchestrator | 2026-03-17 01:07:42 | INFO  | Task 9b54a0bb-ea6a-4470-b27f-ffd98f9917ec is in state SUCCESS 2026-03-17 01:07:42.785997 | orchestrator | 2026-03-17 01:07:42.786106 | orchestrator | 2026-03-17 01:07:42.786116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:07:42.786121 | 
orchestrator |
2026-03-17 01:07:42.786125 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:07:42.786129 | orchestrator | Tuesday 17 March 2026 01:06:10 +0000 (0:00:00.306) 0:00:00.306 *********
2026-03-17 01:07:42.786132 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:07:42.786147 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:07:42.786151 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:07:42.786154 | orchestrator |
2026-03-17 01:07:42.786158 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:07:42.786161 | orchestrator | Tuesday 17 March 2026 01:06:11 +0000 (0:00:00.262) 0:00:00.568 *********
2026-03-17 01:07:42.786164 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-17 01:07:42.786168 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-17 01:07:42.786172 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-17 01:07:42.786175 | orchestrator |
2026-03-17 01:07:42.786178 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-17 01:07:42.786182 | orchestrator |
2026-03-17 01:07:42.786191 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-17 01:07:42.786199 | orchestrator | Tuesday 17 March 2026 01:06:11 +0000 (0:00:00.225) 0:00:00.794 *********
2026-03-17 01:07:42.786205 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:07:42.786214 | orchestrator |
2026-03-17 01:07:42.786221 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-17 01:07:42.786227 | orchestrator | Tuesday 17 March 2026 01:06:11 +0000 (0:00:00.448) 0:00:01.243 *********
2026-03-17 01:07:42.786232 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-17 01:07:42.786238 | orchestrator |
2026-03-17 01:07:42.786244 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-17 01:07:42.786249 | orchestrator | Tuesday 17 March 2026 01:06:15 +0000 (0:00:03.645) 0:00:04.889 *********
2026-03-17 01:07:42.786254 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-17 01:07:42.786260 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-17 01:07:42.786265 | orchestrator |
2026-03-17 01:07:42.786271 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-17 01:07:42.786277 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:06.247) 0:00:11.137 *********
2026-03-17 01:07:42.786283 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:07:42.786290 | orchestrator |
2026-03-17 01:07:42.786294 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-17 01:07:42.786297 | orchestrator | Tuesday 17 March 2026 01:06:24 +0000 (0:00:03.302) 0:00:14.439 *********
2026-03-17 01:07:42.786300 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-17 01:07:42.786304 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:07:42.786307 | orchestrator |
2026-03-17 01:07:42.786311 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-17 01:07:42.786314 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:02.875) 0:00:18.179 *********
2026-03-17 01:07:42.786317 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:07:42.786321 | orchestrator |
2026-03-17 01:07:42.786324 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-03-17 01:07:42.786328 | orchestrator |
Tuesday 17 March 2026 01:06:31 +0000 (0:00:02.875) 0:00:21.055 ********* 2026-03-17 01:07:42.786331 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-17 01:07:42.786334 | orchestrator | 2026-03-17 01:07:42.786338 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-17 01:07:42.786341 | orchestrator | Tuesday 17 March 2026 01:06:34 +0000 (0:00:03.298) 0:00:24.354 ********* 2026-03-17 01:07:42.786367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786434 | orchestrator | 2026-03-17 01:07:42.786438 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-17 01:07:42.786442 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:03.403) 0:00:27.757 ********* 2026-03-17 01:07:42.786447 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:07:42.786451 | orchestrator |
2026-03-17 01:07:42.786454 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-17 01:07:42.786462 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.683) 0:00:28.441 *********
2026-03-17 01:07:42.786466 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:42.786469 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:42.786472 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:42.786476 | orchestrator |
2026-03-17 01:07:42.786479 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-17 01:07:42.786482 | orchestrator | Tuesday 17 March 2026 01:06:43 +0000 (0:00:04.472) 0:00:32.913 *********
2026-03-17 01:07:42.786486 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786489 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786496 | orchestrator |
2026-03-17 01:07:42.786499 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-17 01:07:42.786503 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:01.678) 0:00:34.591 *********
2026-03-17 01:07:42.786506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786509 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:07:42.786516 | orchestrator |
2026-03-17 01:07:42.786519 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-17 01:07:42.786523 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:01.151) 0:00:35.743 *********
2026-03-17 01:07:42.786526 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:07:42.786529 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:07:42.786533 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:07:42.786536 | orchestrator |
2026-03-17 01:07:42.786539 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-17 01:07:42.786543 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:00.130) 0:00:36.380 *********
2026-03-17 01:07:42.786546 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:42.786549 | orchestrator |
2026-03-17 01:07:42.786553 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-17 01:07:42.786556 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:00.253) 0:00:36.510 *********
2026-03-17 01:07:42.786559 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:42.786563 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:42.786566 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:42.786569 | orchestrator |
2026-03-17 01:07:42.786575 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-17 01:07:42.786579 | orchestrator | Tuesday 17 March 2026 01:06:47 +0000 (0:00:00.527) 0:00:36.764 *********
2026-03-17 01:07:42.786582 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:07:42.786586 | orchestrator |
2026-03-17 01:07:42.786589 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-03-17 01:07:42.786592 | orchestrator | Tuesday 17
March 2026 01:06:47 +0000 (0:00:00.527) 0:00:37.292 ********* 2026-03-17 01:07:42.786598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786615 | orchestrator | 2026-03-17 01:07:42.786618 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-17 01:07:42.786622 | orchestrator | Tuesday 17 March 2026 01:06:51 +0000 (0:00:03.786) 0:00:41.078 ********* 2026-03-17 01:07:42.786633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786638 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786647 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.786655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786659 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.786663 | orchestrator | 2026-03-17 01:07:42.786666 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-17 01:07:42.786669 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:04.034) 0:00:45.113 ********* 2026-03-17 01:07:42.786673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786679 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786697 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.786709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:07:42.786718 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.786724 | orchestrator | 2026-03-17 01:07:42.786729 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-17 01:07:42.786735 | orchestrator | Tuesday 17 March 2026 01:07:01 +0000 (0:00:05.801) 0:00:50.915 ********* 2026-03-17 01:07:42.786741 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786746 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.786752 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.786757 | orchestrator | 2026-03-17 01:07:42.786763 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-17 01:07:42.786770 | orchestrator | Tuesday 17 March 2026 01:07:05 +0000 (0:00:03.909) 0:00:54.825 ********* 2026-03-17 01:07:42.786776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:07:42.786799 | orchestrator | 2026-03-17 01:07:42.786803 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-17 01:07:42.786873 | orchestrator | Tuesday 17 March 2026 01:07:08 +0000 (0:00:03.019) 0:00:57.845 ********* 2026-03-17 01:07:42.786882 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:07:42.786888 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:07:42.786894 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:07:42.786899 | orchestrator | 2026-03-17 01:07:42.786905 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-17 01:07:42.786911 | orchestrator | Tuesday 17 March 2026 01:07:14 +0000 (0:00:06.261) 0:01:04.106 ********* 2026-03-17 01:07:42.786917 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.786923 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786928 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.786934 | orchestrator | 2026-03-17 01:07:42.786939 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-17 01:07:42.786945 | orchestrator | Tuesday 17 March 2026 01:07:17 +0000 (0:00:03.159) 0:01:07.265 ********* 2026-03-17 01:07:42.786951 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.786956 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786962 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
01:07:42.786968 | orchestrator | 2026-03-17 01:07:42.786973 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-17 01:07:42.786982 | orchestrator | Tuesday 17 March 2026 01:07:22 +0000 (0:00:04.317) 0:01:11.583 ********* 2026-03-17 01:07:42.786988 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.786994 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.787009 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.787015 | orchestrator | 2026-03-17 01:07:42.787021 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-17 01:07:42.787031 | orchestrator | Tuesday 17 March 2026 01:07:25 +0000 (0:00:03.524) 0:01:15.108 ********* 2026-03-17 01:07:42.787037 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.787043 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.787049 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.787054 | orchestrator | 2026-03-17 01:07:42.787061 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-17 01:07:42.787067 | orchestrator | Tuesday 17 March 2026 01:07:30 +0000 (0:00:04.523) 0:01:19.631 ********* 2026-03-17 01:07:42.787072 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:42.787078 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:42.787085 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:42.787093 | orchestrator | 2026-03-17 01:07:42.787100 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-17 01:07:42.787107 | orchestrator | Tuesday 17 March 2026 01:07:31 +0000 (0:00:01.114) 0:01:20.746 ********* 2026-03-17 01:07:42.787112 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:07:42.787117 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
01:07:42.787122 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-17 01:07:42.787127 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:42.787133 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-17 01:07:42.787138 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:42.787143 | orchestrator |
2026-03-17 01:07:42.787148 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-17 01:07:42.787154 | orchestrator | Tuesday 17 March 2026 01:07:36 +0000 (0:00:05.289) 0:01:26.036 *********
2026-03-17 01:07:42.787160 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"}
2026-03-17 01:07:42.787167 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"}
2026-03-17 01:07:42.787173 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 172, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"}
2026-03-17 01:07:42.787179 | orchestrator |
2026-03-17 01:07:42.787184 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:42.787191 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=1  skipped=11  rescued=0 ignored=0
2026-03-17 01:07:42.787202 | orchestrator | testbed-node-1 : ok=13  changed=7  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0
2026-03-17 01:07:42.787207 | orchestrator | testbed-node-2 : ok=13  changed=7  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0
2026-03-17 01:07:42.787213 | orchestrator |
2026-03-17 01:07:42.787218 | orchestrator |
2026-03-17 01:07:42.787223 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:42.787230 | orchestrator | Tuesday 17 March 2026 01:07:39 +0000 (0:00:03.198) 0:01:29.234 *********
2026-03-17 01:07:42.787234 | orchestrator | ===============================================================================
2026-03-17 01:07:42.787239 | orchestrator | glance : Copying over
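The failure above is a Jinja2 syntax error, not a deployment problem on the nodes: the conditional quoted in the message, `glance_backend_nvme | default(false) | bool)`, carries a stray trailing `)`, so templating aborts on every host before the 'hostnqn' file task can run. The likely fix (an assumption inferred from the error text, not shown in this log) is to drop that parenthesis in `/ansible/roles/glance/tasks/config.yml`, i.e. `when: glance_backend_nvme | default(false) | bool`. A minimal sketch that reproduces the diagnosis with a plain parenthesis-balance check:

```python
# Sketch: the conditional quoted in the error, and the assumed corrected form.
# 'broken' is copied from the log; 'fixed' only removes the stray ')'.
broken = "glance_backend_nvme | default(false) | bool)"
fixed = "glance_backend_nvme | default(false) | bool"

def parens_balanced(expr: str) -> bool:
    """Return True if '(' and ')' pair up left to right in expr."""
    depth = 0
    for ch in expr:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' with no matching '(' -- Jinja2's "unexpected ')'"
                return False
    return depth == 0

print(parens_balanced(broken))  # False: the trailing ')' is unmatched
print(parens_balanced(fixed))   # True
```

Note that Ansible reports the error at the task's `- name:` line (line 172, column 3) even though the bad token sits in the task's `when:` expression, which is why the "offending line" pointer looks unhelpful here.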
glance-api.conf ----------------------------------- 6.26s 2026-03-17 01:07:42.787243 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.25s 2026-03-17 01:07:42.787251 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.80s 2026-03-17 01:07:42.787257 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.29s 2026-03-17 01:07:42.787265 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.52s 2026-03-17 01:07:42.787272 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.47s 2026-03-17 01:07:42.787277 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.32s 2026-03-17 01:07:42.787282 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.03s 2026-03-17 01:07:42.787287 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.91s 2026-03-17 01:07:42.787292 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.79s 2026-03-17 01:07:42.787297 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.74s 2026-03-17 01:07:42.787302 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.65s 2026-03-17 01:07:42.787308 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.52s 2026-03-17 01:07:42.787313 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.40s 2026-03-17 01:07:42.787317 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.30s 2026-03-17 01:07:42.787322 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.30s 2026-03-17 01:07:42.787327 | orchestrator | glance : Generating 'hostnqn' 
file for glance_api ----------------------- 3.20s 2026-03-17 01:07:42.787331 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.16s 2026-03-17 01:07:42.787336 | orchestrator | glance : Copying over config.json files for services -------------------- 3.02s 2026-03-17 01:07:42.787341 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.88s 2026-03-17 01:07:42.787346 | orchestrator | 2026-03-17 01:07:42 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:42.788780 | orchestrator | 2026-03-17 01:07:42 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:42.790210 | orchestrator | 2026-03-17 01:07:42 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:42.790242 | orchestrator | 2026-03-17 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:45.835541 | orchestrator | 2026-03-17 01:07:45 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:07:45.835720 | orchestrator | 2026-03-17 01:07:45 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:45.836759 | orchestrator | 2026-03-17 01:07:45 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:07:45.838089 | orchestrator | 2026-03-17 01:07:45 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:45.838130 | orchestrator | 2026-03-17 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:07:48.884554 | orchestrator | 2026-03-17 01:07:48 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:07:48.884613 | orchestrator | 2026-03-17 01:07:48 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:07:48.885644 | orchestrator | 2026-03-17 01:07:48 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state 
STARTED 2026-03-17 01:07:48.886552 | orchestrator | 2026-03-17 01:07:48 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:07:48.886581 | orchestrator | 2026-03-17 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:07.204738 | orchestrator | 2026-03-17 01:08:07 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:07.207685 | orchestrator | 2026-03-17 01:08:07 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:07.209123 | orchestrator | 2026-03-17 01:08:07 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state STARTED 2026-03-17 01:08:07.211063 | orchestrator
| 2026-03-17 01:08:07 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:08:07.211100 | orchestrator | 2026-03-17 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:10.260909 | orchestrator | 2026-03-17 01:08:10 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:10.260956 | orchestrator | 2026-03-17 01:08:10 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:10.265487 | orchestrator | 2026-03-17 01:08:10 | INFO  | Task 13e17ca2-022e-4adb-aab8-c9087b7d146a is in state SUCCESS 2026-03-17 01:08:10.266123 | orchestrator | 2026-03-17 01:08:10.266184 | orchestrator | 2026-03-17 01:08:10.266193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:08:10.266200 | orchestrator | 2026-03-17 01:08:10.266206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:08:10.266210 | orchestrator | Tuesday 17 March 2026 01:05:15 +0000 (0:00:00.279) 0:00:00.279 ********* 2026-03-17 01:08:10.266213 | orchestrator | ok: [testbed-manager] 2026-03-17 01:08:10.266218 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:08:10.266221 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:08:10.266224 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:08:10.266227 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:08:10.266230 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:08:10.266234 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:08:10.266237 | orchestrator | 2026-03-17 01:08:10.266240 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:08:10.266243 | orchestrator | Tuesday 17 March 2026 01:05:16 +0000 (0:00:00.618) 0:00:00.898 ********* 2026-03-17 01:08:10.266247 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266250 | orchestrator | 
ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266253 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266256 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266259 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266262 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266265 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-17 01:08:10.266269 | orchestrator | 2026-03-17 01:08:10.266272 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-17 01:08:10.266275 | orchestrator | 2026-03-17 01:08:10.266286 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:08:10.266291 | orchestrator | Tuesday 17 March 2026 01:05:16 +0000 (0:00:00.708) 0:00:01.607 ********* 2026-03-17 01:08:10.266297 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:08:10.266303 | orchestrator | 2026-03-17 01:08:10.266308 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-17 01:08:10.266325 | orchestrator | Tuesday 17 March 2026 01:05:18 +0000 (0:00:01.104) 0:00:02.712 ********* 2026-03-17 01:08:10.266330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266335 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:08:10.266339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266386 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266401 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266469 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:08:10.266481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266505 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266531 | orchestrator | 2026-03-17 01:08:10.266535 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:08:10.266538 | orchestrator | Tuesday 17 March 2026 01:05:21 +0000 (0:00:03.547) 0:00:06.259 ********* 2026-03-17 01:08:10.266543 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:08:10.266547 | orchestrator | 2026-03-17 01:08:10.266551 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-17 01:08:10.266554 | orchestrator | Tuesday 17 March 2026 01:05:22 +0000 (0:00:01.224) 0:00:07.483 ********* 2026-03-17 01:08:10.266558 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:08:10.266561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 
01:08:10.266943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.266967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.266979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.266999 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267045 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:08:10.267051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267065 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267092 | orchestrator | 2026-03-17 01:08:10.267096 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS 
certificate] *** 2026-03-17 01:08:10.267100 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:04.608) 0:00:12.092 ********* 2026-03-17 01:08:10.267106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 01:08:10.267109 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267117 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 01:08:10.267126 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267130 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:08:10.267135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267164 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267167 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:10.267172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267511 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:10.267515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267520 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267528 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:08:10.267532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267570 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:08:10.267574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267588 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:08:10.267592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267612 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:08:10.267616 | orchestrator | 2026-03-17 01:08:10.267619 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-17 01:08:10.267623 | orchestrator | Tuesday 17 March 2026 01:05:29 +0000 (0:00:02.093) 0:00:14.186 ********* 2026-03-17 01:08:10.267629 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 01:08:10.267633 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 01:08:10.267649 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267653 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:08:10.267664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267669 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267723 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267729 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:10.267733 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:10.267736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:08:10.267744 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:10.267755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267767 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:08:10.267802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267817 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:08:10.267820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:08:10.267824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:08:10.267834 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:08:10.267838 | orchestrator | 2026-03-17 01:08:10.267842 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-17 01:08:10.267845 | orchestrator | Tuesday 17 March 2026 01:05:32 +0000 (0:00:02.658) 0:00:16.844 ********* 2026-03-17 01:08:10.267849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267861 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:08:10.267865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267886 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.267898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267947 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.267987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.267998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.268002 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:08:10.268006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.268012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.268016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.268022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.268028 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.268032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.268036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.268040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.268043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-17 01:08:10.268047 | orchestrator | 2026-03-17 01:08:10.268050 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-17 01:08:10.268054 | orchestrator | Tuesday 17 March 2026 01:05:39 +0000 (0:00:07.528) 0:00:24.372 ********* 2026-03-17 01:08:10.268058 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:08:10.268062 | orchestrator | 2026-03-17 01:08:10.268065 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-17 01:08:10.268071 | orchestrator | Tuesday 17 March 2026 01:05:40 +0000 (0:00:00.920) 0:00:25.293 ********* 2026-03-17 01:08:10.268075 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268085 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268089 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268093 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.268097 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268101 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268107 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268111 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268117 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 
1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268121 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 2106933, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7143126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268144 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268147 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268167 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268172 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268176 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268180 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 2106943, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7189586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.268184 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268188 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268194 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268201 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 
1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268207 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268214 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268220 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268225 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268230 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268235 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268274 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268296 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 2106931, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7133958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.268305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268311 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268316 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268322 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268327 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 
1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268341 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268372 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.268379 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268394 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268401 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268410 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268419 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268434 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268875 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268903 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268915 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268941 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268948 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268987 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268993 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.268999 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269004 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269015 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269034 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 2106939, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7169297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269038 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269044 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269047 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269051 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269065 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269084 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269089 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269097 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269103 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269108 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269117 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269123 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269142 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269146 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 2106929, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7128568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269151 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269154 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269158 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269220 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269230 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269236 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269245 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269251 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269260 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269264 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269267 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269274 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269278 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269283 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269286 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269292 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269295 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269298 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 2106934, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7147026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269304 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269308 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269313 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269316 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269322 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269325 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269328 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269335 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269341 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269384 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269402 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269411 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:08:10.269416 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.269424 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True,
'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269429 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:10.269434 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269442 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269452 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269457 | orchestrator | 
skipping: [testbed-node-4] 2026-03-17 01:08:10.269462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269466 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269471 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:08:10.269476 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 2106938, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7159586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269484 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269490 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:10.269526 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269535 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:08:10.269545 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:10.269550 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 2106935, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269556 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 2106932, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.714036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269561 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106942, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7181091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269567 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106927, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7123582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 2106949, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7217052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 2106941, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7177052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269594 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 2106930, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 
'ctime': 1773708109.7130928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269600 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 2106928, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7126358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269605 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 2106937, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7158294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269610 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 2106936, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7149584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 2106948, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7209587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:08:10.269648 | orchestrator | 2026-03-17 01:08:10.269653 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-17 01:08:10.269657 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:22.640) 0:00:47.934 ********* 2026-03-17 01:08:10.269660 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:08:10.269663 | orchestrator | 2026-03-17 01:08:10.269668 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-17 01:08:10.269672 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:01.027) 0:00:48.961 ********* 2026-03-17 01:08:10.269675 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269682 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269692 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269695 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:08:10.269698 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269701 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269704 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269708 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269711 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269714 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 01:08:10.269717 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269723 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269726 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269731 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269735 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 01:08:10.269738 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269741 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269744 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269750 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269753 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:08:10.269756 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269759 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269763 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269766 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269769 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269772 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:08:10.269775 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269778 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269781 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269838 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269844 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:08:10.269862 | orchestrator | [WARNING]: Skipped 2026-03-17 01:08:10.269868 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269873 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-17 01:08:10.269878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:08:10.269883 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-17 01:08:10.269888 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:08:10.269894 | orchestrator | 2026-03-17 01:08:10.269899 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-17 01:08:10.269904 | orchestrator | Tuesday 17 March 2026 01:06:07 +0000 (0:00:02.770) 0:00:51.731 ********* 2026-03-17 01:08:10.269910 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:08:10.269914 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:10.269917 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  
2026-03-17 01:08:10.269924 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.269928 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:08:10.269931 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.269935 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:08:10.269941 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.269945 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:08:10.269951 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.269956 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:08:10.269961 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.269966 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:08:10.269972 | orchestrator |
2026-03-17 01:08:10.269978 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-17 01:08:10.269983 | orchestrator | Tuesday 17 March 2026 01:06:20 +0000 (0:00:13.813) 0:01:05.545 *********
2026-03-17 01:08:10.269994 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270000 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270006 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270009 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270049 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270056 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270062 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270066 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270071 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270076 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270080 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270085 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270090 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:08:10.270095 | orchestrator |
2026-03-17 01:08:10.270100 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-17 01:08:10.270105 | orchestrator | Tuesday 17 March 2026 01:06:23 +0000 (0:00:02.756) 0:01:08.302 *********
2026-03-17 01:08:10.270110 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270120 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270126 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270132 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270137 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270142 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270145 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270148 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270151 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270154 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270158 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270164 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270167 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:08:10.270171 | orchestrator |
2026-03-17 01:08:10.270174 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-17 01:08:10.270177 | orchestrator | Tuesday 17 March 2026 01:06:24 +0000 (0:00:01.211) 0:01:09.513 *********
2026-03-17 01:08:10.270180 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:08:10.270183 | orchestrator |
2026-03-17 01:08:10.270187 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-17 01:08:10.270190 | orchestrator | Tuesday 17 March 2026 01:06:25 +0000 (0:00:00.699) 0:01:10.213 *********
2026-03-17 01:08:10.270193 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:08:10.270196 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270199 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270202 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270206 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270209 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270212 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270215 | orchestrator |
2026-03-17 01:08:10.270218 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-17 01:08:10.270221 | orchestrator | Tuesday 17 March 2026 01:06:26 +0000 (0:00:00.683) 0:01:10.897 *********
2026-03-17 01:08:10.270224 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:08:10.270227 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270230 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270233 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270237 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:10.270240 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:10.270243 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:10.270246 | orchestrator |
2026-03-17 01:08:10.270250 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-17 01:08:10.270253 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:01.834) 0:01:12.731 *********
2026-03-17 01:08:10.270256 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270261 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:08:10.270266 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270273 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270280 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270285 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270290 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270295 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270306 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270311 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270316 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270321 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270326 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:08:10.270377 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270384 | orchestrator |
2026-03-17 01:08:10.270389 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-17 01:08:10.270394 | orchestrator | Tuesday 17 March 2026 01:06:29 +0000 (0:00:01.357) 0:01:14.088 *********
2026-03-17 01:08:10.270399 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270410 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270414 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270419 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270424 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270430 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270435 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270440 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270446 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270449 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270453 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270456 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270459 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:08:10.270498 | orchestrator |
2026-03-17 01:08:10.270502 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-17 01:08:10.270505 | orchestrator | Tuesday 17 March 2026 01:06:30 +0000 (0:00:01.299) 0:01:15.388 *********
2026-03-17 01:08:10.270508 | orchestrator | [WARNING]: Skipped
2026-03-17 01:08:10.270512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-17 01:08:10.270516 | orchestrator | due to this access issue:
2026-03-17 01:08:10.270519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-17 01:08:10.270522 | orchestrator | not a directory
2026-03-17 01:08:10.270526 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:08:10.270529 | orchestrator |
2026-03-17 01:08:10.270532 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-17 01:08:10.270537 | orchestrator | Tuesday 17 March 2026 01:06:31 +0000 (0:00:01.036) 0:01:16.424 *********
2026-03-17 01:08:10.270542 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:08:10.270547 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:10.270552 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:10.270558 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:10.270563 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:08:10.270568 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:08:10.270573 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:08:10.270578 | orchestrator |
2026-03-17 01:08:10.270583 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-17 01:08:10.270589 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:00.601) 0:01:17.026 *********
2026-03-17 01:08:10.270594 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:08:10.270599 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:10.270659 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:10.270663 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:10.270666 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:08:10.270670 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:08:10.270673 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:08:10.270676 | orchestrator | 2026-03-17 01:08:10.270679 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-17 01:08:10.270683 | orchestrator | Tuesday 17 March 2026 01:06:33 +0000 (0:00:00.697) 0:01:17.724 ********* 2026-03-17 01:08:10.270688 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:08:10.270703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270729 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:08:10.270736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270742 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:08:10.270810 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270833 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:08:10.270842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:08:10.270852 | orchestrator | 2026-03-17 01:08:10.270855 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-17 01:08:10.270859 | orchestrator | Tuesday 17 March 2026 01:06:36 +0000 (0:00:03.787) 0:01:21.511 ********* 2026-03-17 01:08:10.270864 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-17 01:08:10.270868 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:08:10.270871 | orchestrator | 2026-03-17 01:08:10.270874 | orchestrator | TASK [prometheus 
: Flush handlers] ********************************************* 2026-03-17 01:08:10.270877 | orchestrator | Tuesday 17 March 2026 01:06:37 +0000 (0:00:00.953) 0:01:22.465 ********* 2026-03-17 01:08:10.270881 | orchestrator | 2026-03-17 01:08:10.270884 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270887 | orchestrator | Tuesday 17 March 2026 01:06:37 +0000 (0:00:00.065) 0:01:22.530 ********* 2026-03-17 01:08:10.270890 | orchestrator | 2026-03-17 01:08:10.270893 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270896 | orchestrator | Tuesday 17 March 2026 01:06:37 +0000 (0:00:00.061) 0:01:22.592 ********* 2026-03-17 01:08:10.270900 | orchestrator | 2026-03-17 01:08:10.270903 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270906 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.093) 0:01:22.685 ********* 2026-03-17 01:08:10.270909 | orchestrator | 2026-03-17 01:08:10.270913 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270916 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.063) 0:01:22.748 ********* 2026-03-17 01:08:10.270919 | orchestrator | 2026-03-17 01:08:10.270923 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270926 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.060) 0:01:22.809 ********* 2026-03-17 01:08:10.270929 | orchestrator | 2026-03-17 01:08:10.270933 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:08:10.270936 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.061) 0:01:22.871 ********* 2026-03-17 01:08:10.270939 | orchestrator | 2026-03-17 01:08:10.270942 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-17 01:08:10.270946 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.086) 0:01:22.957 ********* 2026-03-17 01:08:10.270949 | orchestrator | changed: [testbed-manager] 2026-03-17 01:08:10.270952 | orchestrator | 2026-03-17 01:08:10.270955 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-17 01:08:10.270961 | orchestrator | Tuesday 17 March 2026 01:06:53 +0000 (0:00:15.448) 0:01:38.406 ********* 2026-03-17 01:08:10.270965 | orchestrator | changed: [testbed-manager] 2026-03-17 01:08:10.270968 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:10.270971 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:10.270974 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:08:10.270978 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:10.270981 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:08:10.270984 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:08:10.270987 | orchestrator | 2026-03-17 01:08:10.270990 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-17 01:08:10.270994 | orchestrator | Tuesday 17 March 2026 01:07:08 +0000 (0:00:14.570) 0:01:52.976 ********* 2026-03-17 01:08:10.270997 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:10.271000 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:10.271004 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:10.271007 | orchestrator | 2026-03-17 01:08:10.271010 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-17 01:08:10.271013 | orchestrator | Tuesday 17 March 2026 01:07:18 +0000 (0:00:10.365) 0:02:03.342 ********* 2026-03-17 01:08:10.271016 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:10.271020 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:10.271026 | 
orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:10.271033 | orchestrator | 2026-03-17 01:08:10.271039 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-17 01:08:10.271044 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:09.986) 0:02:13.329 ********* 2026-03-17 01:08:10.271053 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:10.271059 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:08:10.271064 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:10.271071 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:08:10.271076 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:08:10.271079 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:10.271082 | orchestrator | changed: [testbed-manager] 2026-03-17 01:08:10.271086 | orchestrator | 2026-03-17 01:08:10.271089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-17 01:08:10.271092 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:13.760) 0:02:27.089 ********* 2026-03-17 01:08:10.271096 | orchestrator | changed: [testbed-manager] 2026-03-17 01:08:10.271099 | orchestrator | 2026-03-17 01:08:10.271102 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-17 01:08:10.271105 | orchestrator | Tuesday 17 March 2026 01:07:50 +0000 (0:00:07.576) 0:02:34.666 ********* 2026-03-17 01:08:10.271108 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:10.271112 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:10.271115 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:10.271118 | orchestrator | 2026-03-17 01:08:10.271121 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-17 01:08:10.271124 | orchestrator | Tuesday 17 March 2026 01:07:54 +0000 (0:00:04.735) 0:02:39.402 ********* 2026-03-17 01:08:10.271127 | 
orchestrator | changed: [testbed-manager] 2026-03-17 01:08:10.271130 | orchestrator | 2026-03-17 01:08:10.271134 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-17 01:08:10.271137 | orchestrator | Tuesday 17 March 2026 01:07:59 +0000 (0:00:04.976) 0:02:44.379 ********* 2026-03-17 01:08:10.271140 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:08:10.271143 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:08:10.271147 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:08:10.271150 | orchestrator | 2026-03-17 01:08:10.271153 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:08:10.271156 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-17 01:08:10.271160 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:08:10.271164 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:08:10.271167 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:08:10.271170 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:08:10.271219 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:08:10.271226 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:08:10.271231 | orchestrator | 2026-03-17 01:08:10.271236 | orchestrator | 2026-03-17 01:08:10.271242 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:08:10.271246 | orchestrator | Tuesday 17 March 2026 01:08:09 +0000 (0:00:09.748) 0:02:54.127 ********* 2026-03-17 
01:08:10.271251 | orchestrator | =============================================================================== 2026-03-17 01:08:10.271256 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.64s 2026-03-17 01:08:10.271261 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.45s 2026-03-17 01:08:10.271270 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.57s 2026-03-17 01:08:10.271275 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.81s 2026-03-17 01:08:10.271304 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.76s 2026-03-17 01:08:10.271311 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.37s 2026-03-17 01:08:10.271316 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.99s 2026-03-17 01:08:10.271321 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.75s 2026-03-17 01:08:10.271326 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.58s 2026-03-17 01:08:10.271331 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.53s 2026-03-17 01:08:10.271336 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.98s 2026-03-17 01:08:10.271359 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.74s 2026-03-17 01:08:10.271365 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 4.61s 2026-03-17 01:08:10.271371 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.79s 2026-03-17 01:08:10.271376 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.55s 2026-03-17 01:08:10.271381 
| orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.77s 2026-03-17 01:08:10.271386 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.76s 2026-03-17 01:08:10.271392 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.66s 2026-03-17 01:08:10.271397 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.10s 2026-03-17 01:08:10.271405 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.83s 2026-03-17 01:08:10.271411 | orchestrator | 2026-03-17 01:08:10 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:08:10.271416 | orchestrator | 2026-03-17 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:13.324911 | orchestrator | 2026-03-17 01:08:13 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:13.327286 | orchestrator | 2026-03-17 01:08:13 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:13.329710 | orchestrator | 2026-03-17 01:08:13 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state STARTED 2026-03-17 01:08:13.333916 | orchestrator | 2026-03-17 01:08:13 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:08:13.333967 | orchestrator | 2026-03-17 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:16.381047 | orchestrator | 2026-03-17 01:08:16 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:16.383293 | orchestrator | 2026-03-17 01:08:16 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:16.385239 | orchestrator | 2026-03-17 01:08:16 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state STARTED 2026-03-17 01:08:16.387779 | orchestrator | 2026-03-17 01:08:16 | INFO  | Task 
0c747add-54b8-407e-aa40-bfb32ea89c30 is in state STARTED 2026-03-17 01:08:16.387821 | orchestrator | 2026-03-17 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:56.063409 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:56.065050 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:56.067428 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state STARTED 2026-03-17 01:08:56.070772 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task 0c747add-54b8-407e-aa40-bfb32ea89c30 is in state SUCCESS 2026-03-17 01:08:56.071217 | orchestrator | 2026-03-17 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:56.072384 | orchestrator | 2026-03-17 01:08:56.072429 | orchestrator | 2026-03-17 01:08:56.072435 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:08:56.072439 | orchestrator | 2026-03-17 01:08:56.072442 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:08:56.072446 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:00.280) 0:00:00.280 ********* 2026-03-17 01:08:56.072449 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:08:56.072453 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:08:56.072457 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:08:56.072460 | orchestrator | 2026-03-17 01:08:56.072464 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:08:56.072467 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:00.356) 0:00:00.637 ********* 2026-03-17 01:08:56.072470 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-17 01:08:56.072474 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 
2026-03-17 01:08:56.072477 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-17 01:08:56.072480 | orchestrator | 2026-03-17 01:08:56.072483 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-17 01:08:56.072486 | orchestrator | 2026-03-17 01:08:56.072489 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:56.072492 | orchestrator | Tuesday 17 March 2026 01:06:29 +0000 (0:00:00.399) 0:00:01.036 ********* 2026-03-17 01:08:56.072495 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:56.072499 | orchestrator | 2026-03-17 01:08:56.072502 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-17 01:08:56.072505 | orchestrator | Tuesday 17 March 2026 01:06:29 +0000 (0:00:00.513) 0:00:01.550 ********* 2026-03-17 01:08:56.072509 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-17 01:08:56.072512 | orchestrator | 2026-03-17 01:08:56.072515 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-17 01:08:56.072518 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:03.307) 0:00:04.857 ********* 2026-03-17 01:08:56.072521 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-17 01:08:56.072565 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-17 01:08:56.072568 | orchestrator | 2026-03-17 01:08:56.072602 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-17 01:08:56.072610 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:05.913) 0:00:10.770 ********* 2026-03-17 01:08:56.072616 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:08:56.072622 | orchestrator | 2026-03-17 01:08:56.072627 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-17 01:08:56.072632 | orchestrator | Tuesday 17 March 2026 01:06:41 +0000 (0:00:02.739) 0:00:13.510 ********* 2026-03-17 01:08:56.072637 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-17 01:08:56.072643 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:08:56.072663 | orchestrator | 2026-03-17 01:08:56.073221 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-17 01:08:56.073244 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:03.876) 0:00:17.387 ********* 2026-03-17 01:08:56.073249 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:08:56.073253 | orchestrator | 2026-03-17 01:08:56.073257 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-17 01:08:56.073261 | orchestrator | Tuesday 17 March 2026 01:06:48 +0000 (0:00:03.052) 0:00:20.439 ********* 2026-03-17 01:08:56.073265 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-17 01:08:56.073269 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-17 01:08:56.073272 | orchestrator | 2026-03-17 01:08:56.073295 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-17 01:08:56.073301 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:07.339) 0:00:27.779 ********* 2026-03-17 01:08:56.073314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}) 2026-03-17 01:08:56.073407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073430 | orchestrator | 2026-03-17 01:08:56.073434 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:56.073440 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:03.874) 0:00:31.654 ********* 2026-03-17 01:08:56.073443 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.073447 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.073451 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.073454 | orchestrator | 2026-03-17 01:08:56.073458 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:56.073462 | orchestrator | Tuesday 17 March 2026 01:07:00 +0000 (0:00:00.336) 0:00:31.991 ********* 2026-03-17 01:08:56.073466 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:56.073470 | orchestrator | 2026-03-17 01:08:56.073474 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-17 01:08:56.073477 | orchestrator | Tuesday 17 
March 2026 01:07:00 +0000 (0:00:00.546) 0:00:32.537 ********* 2026-03-17 01:08:56.073490 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-17 01:08:56.073505 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-17 01:08:56.073509 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-17 01:08:56.073513 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-17 01:08:56.073517 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-17 01:08:56.073520 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-17 01:08:56.073524 | orchestrator | 2026-03-17 01:08:56.073528 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-17 01:08:56.073531 | orchestrator | Tuesday 17 March 2026 01:07:02 +0000 (0:00:02.327) 0:00:34.864 ********* 2026-03-17 01:08:56.073536 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073543 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073547 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073553 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073566 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073571 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-17 01:08:56.073579 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073584 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073592 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073609 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073616 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073628 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-17 01:08:56.073633 | orchestrator | 2026-03-17 01:08:56.073638 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-17 01:08:56.073644 | orchestrator | Tuesday 17 March 2026 01:07:06 +0000 (0:00:03.167) 0:00:38.032 ********* 2026-03-17 01:08:56.073649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-17 01:08:56.073656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-17 01:08:56.073661 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-17 01:08:56.073666 | orchestrator | 2026-03-17 01:08:56.073671 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-17 01:08:56.073676 | orchestrator | Tuesday 17 March 2026 01:07:07 +0000 (0:00:01.383) 0:00:39.416 ********* 2026-03-17 01:08:56.073681 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-17 01:08:56.073687 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-17 01:08:56.073692 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-17 
01:08:56.073697 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:08:56.073703 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:08:56.073708 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:08:56.073713 | orchestrator | 2026-03-17 01:08:56.073719 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-17 01:08:56.073724 | orchestrator | Tuesday 17 March 2026 01:07:10 +0000 (0:00:02.852) 0:00:42.268 ********* 2026-03-17 01:08:56.073730 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-17 01:08:56.073736 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-17 01:08:56.073741 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-17 01:08:56.073746 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-17 01:08:56.073751 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-17 01:08:56.073756 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-17 01:08:56.073762 | orchestrator | 2026-03-17 01:08:56.073769 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-17 01:08:56.073774 | orchestrator | Tuesday 17 March 2026 01:07:11 +0000 (0:00:00.961) 0:00:43.230 ********* 2026-03-17 01:08:56.073783 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.073788 | orchestrator | 2026-03-17 01:08:56.073794 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-17 01:08:56.073799 | orchestrator | Tuesday 17 March 2026 01:07:11 +0000 (0:00:00.172) 0:00:43.403 ********* 2026-03-17 01:08:56.073804 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.073809 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.073818 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:08:56.073824 | orchestrator | 2026-03-17 01:08:56.073830 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:56.073836 | orchestrator | Tuesday 17 March 2026 01:07:11 +0000 (0:00:00.441) 0:00:43.844 ********* 2026-03-17 01:08:56.073842 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:56.073848 | orchestrator | 2026-03-17 01:08:56.073853 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-17 01:08:56.073879 | orchestrator | Tuesday 17 March 2026 01:07:12 +0000 (0:00:00.523) 0:00:44.368 ********* 2026-03-17 01:08:56.073887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.073906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.073981 | orchestrator | 2026-03-17 01:08:56.073987 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-17 
01:08:56.073993 | orchestrator | Tuesday 17 March 2026 01:07:16 +0000 (0:00:04.030) 0:00:48.398 ********* 2026-03-17 01:08:56.073999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074082 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.074093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074117 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.074123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074159 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.074164 | orchestrator | 2026-03-17 01:08:56.074170 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-17 01:08:56.074176 | orchestrator | Tuesday 17 March 2026 01:07:17 +0000 (0:00:00.883) 0:00:49.282 ********* 2026-03-17 01:08:56.074182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074213 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.074220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074246 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.074255 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074295 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.074300 | orchestrator | 2026-03-17 01:08:56.074311 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-17 01:08:56.074317 | orchestrator | Tuesday 17 March 2026 01:07:18 +0000 (0:00:00.817) 0:00:50.099 ********* 2026-03-17 01:08:56.074322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-17 01:08:56.074380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074426 | orchestrator | 2026-03-17 01:08:56.074432 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-17 01:08:56.074437 | orchestrator | Tuesday 17 March 2026 01:07:22 +0000 (0:00:04.171) 0:00:54.271 ********* 2026-03-17 01:08:56.074443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:56.074450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:56.074455 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:56.074461 | orchestrator | 2026-03-17 01:08:56.074466 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-17 01:08:56.074472 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:02.055) 0:00:56.326 ********* 2026-03-17 01:08:56.074484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074570 | orchestrator | 2026-03-17 01:08:56.074576 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-17 01:08:56.074582 | orchestrator | Tuesday 17 March 2026 01:07:37 +0000 (0:00:13.515) 0:01:09.842 ********* 2026-03-17 01:08:56.074588 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.074594 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.074600 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.074605 | orchestrator | 2026-03-17 01:08:56.074611 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-17 01:08:56.074620 | orchestrator | Tuesday 17 March 2026 01:07:39 +0000 (0:00:01.576) 0:01:11.418 ********* 2026-03-17 01:08:56.074626 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.074632 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.074638 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.074644 | orchestrator | 2026-03-17 01:08:56.074650 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-17 01:08:56.074655 | orchestrator | Tuesday 17 March 2026 01:07:40 +0000 (0:00:01.349) 0:01:12.768 ********* 2026-03-17 01:08:56.074662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074689 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.074700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074729 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:08:56.074736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:56.074744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:56.074770 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.074776 | orchestrator | 2026-03-17 01:08:56.074782 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-17 01:08:56.074788 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:00.832) 0:01:13.600 ********* 2026-03-17 01:08:56.074794 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.074799 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.074805 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.074811 | orchestrator | 2026-03-17 01:08:56.074817 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-17 01:08:56.074824 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:00.273) 0:01:13.874 ********* 2026-03-17 01:08:56.074830 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:56.074861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074875 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:56.074926 | orchestrator | 2026-03-17 01:08:56.074932 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:56.074938 | orchestrator | Tuesday 17 March 2026 01:07:44 +0000 (0:00:02.762) 0:01:16.636 ********* 2026-03-17 01:08:56.074944 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.074950 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:56.074956 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:56.074962 | orchestrator | 2026-03-17 01:08:56.074968 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-17 01:08:56.074975 | orchestrator | Tuesday 17 March 2026 01:07:45 +0000 (0:00:00.287) 0:01:16.924 ********* 2026-03-17 01:08:56.074981 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.074986 | orchestrator | 2026-03-17 01:08:56.074992 | orchestrator | TASK [cinder : Creating Cinder database user and 
setting permissions] ********** 2026-03-17 01:08:56.074998 | orchestrator | Tuesday 17 March 2026 01:07:47 +0000 (0:00:02.079) 0:01:19.003 ********* 2026-03-17 01:08:56.075004 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075010 | orchestrator | 2026-03-17 01:08:56.075016 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-17 01:08:56.075022 | orchestrator | Tuesday 17 March 2026 01:07:49 +0000 (0:00:02.460) 0:01:21.464 ********* 2026-03-17 01:08:56.075027 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075031 | orchestrator | 2026-03-17 01:08:56.075036 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:56.075041 | orchestrator | Tuesday 17 March 2026 01:08:06 +0000 (0:00:17.397) 0:01:38.862 ********* 2026-03-17 01:08:56.075047 | orchestrator | 2026-03-17 01:08:56.075052 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:56.075058 | orchestrator | Tuesday 17 March 2026 01:08:07 +0000 (0:00:00.058) 0:01:38.920 ********* 2026-03-17 01:08:56.075067 | orchestrator | 2026-03-17 01:08:56.075071 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:56.075076 | orchestrator | Tuesday 17 March 2026 01:08:07 +0000 (0:00:00.057) 0:01:38.977 ********* 2026-03-17 01:08:56.075081 | orchestrator | 2026-03-17 01:08:56.075086 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-17 01:08:56.075091 | orchestrator | Tuesday 17 March 2026 01:08:07 +0000 (0:00:00.073) 0:01:39.051 ********* 2026-03-17 01:08:56.075096 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075101 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.075106 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.075111 | orchestrator | 2026-03-17 01:08:56.075119 
| orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-17 01:08:56.075124 | orchestrator | Tuesday 17 March 2026 01:08:22 +0000 (0:00:15.824) 0:01:54.876 ********* 2026-03-17 01:08:56.075130 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075136 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.075140 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.075145 | orchestrator | 2026-03-17 01:08:56.075150 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-17 01:08:56.075155 | orchestrator | Tuesday 17 March 2026 01:08:28 +0000 (0:00:05.593) 0:02:00.470 ********* 2026-03-17 01:08:56.075160 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075165 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.075171 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.075175 | orchestrator | 2026-03-17 01:08:56.075180 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-17 01:08:56.075192 | orchestrator | Tuesday 17 March 2026 01:08:47 +0000 (0:00:18.594) 0:02:19.064 ********* 2026-03-17 01:08:56.075197 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:56.075202 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:56.075207 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:56.075212 | orchestrator | 2026-03-17 01:08:56.075244 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-17 01:08:56.075252 | orchestrator | Tuesday 17 March 2026 01:08:52 +0000 (0:00:05.674) 0:02:24.739 ********* 2026-03-17 01:08:56.075257 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:56.075262 | orchestrator | 2026-03-17 01:08:56.075267 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:08:56.075272 | orchestrator | 
testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-17 01:08:56.075288 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:56.075293 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:56.075298 | orchestrator | 2026-03-17 01:08:56.075304 | orchestrator | 2026-03-17 01:08:56.075308 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:08:56.075314 | orchestrator | Tuesday 17 March 2026 01:08:53 +0000 (0:00:00.255) 0:02:24.995 ********* 2026-03-17 01:08:56.075319 | orchestrator | =============================================================================== 2026-03-17 01:08:56.075324 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 18.59s 2026-03-17 01:08:56.075329 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.40s 2026-03-17 01:08:56.075334 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 15.82s 2026-03-17 01:08:56.075339 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.52s 2026-03-17 01:08:56.075344 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.34s 2026-03-17 01:08:56.075349 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.91s 2026-03-17 01:08:56.075360 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.67s 2026-03-17 01:08:56.075365 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.59s 2026-03-17 01:08:56.075370 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.17s 2026-03-17 01:08:56.075375 | orchestrator | service-cert-copy : cinder 
| Copying over extra CA certificates --------- 4.03s 2026-03-17 01:08:56.075380 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.88s 2026-03-17 01:08:56.075385 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.87s 2026-03-17 01:08:56.075391 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.31s 2026-03-17 01:08:56.075399 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.17s 2026-03-17 01:08:56.075405 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.05s 2026-03-17 01:08:56.075411 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.85s 2026-03-17 01:08:56.075416 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.76s 2026-03-17 01:08:56.075421 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.74s 2026-03-17 01:08:56.075426 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.46s 2026-03-17 01:08:56.075431 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.33s 2026-03-17 01:08:59.125533 | orchestrator | 2026-03-17 01:08:59 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:08:59.127013 | orchestrator | 2026-03-17 01:08:59 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:08:59.128818 | orchestrator | 2026-03-17 01:08:59 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state STARTED 2026-03-17 01:08:59.128867 | orchestrator | 2026-03-17 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:02.168843 | orchestrator | 2026-03-17 01:09:02 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:09:02.170571 | orchestrator | 
2026-03-17 01:09:02 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:09:02.172314 | orchestrator | 2026-03-17 01:09:02 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state STARTED 2026-03-17 01:09:02.172588 | orchestrator | 2026-03-17 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:03.046108 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:10:03.048930 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:10:03.051556 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task 1238657d-e96a-4dd3-8b3c-daf1ed1c293c is in state SUCCESS 2026-03-17 01:10:03.052718 | orchestrator | 2026-03-17 01:10:03.052756 | orchestrator | 2026-03-17 01:10:03.052761 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:10:03.052765 | orchestrator | 2026-03-17 01:10:03.052768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:10:03.052772 | orchestrator | Tuesday 17 March 2026 01:08:12 +0000 (0:00:00.281) 0:00:00.281 ********* 2026-03-17 01:10:03.052775 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:10:03.052779 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:10:03.052782 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:10:03.052786 | orchestrator | 2026-03-17 01:10:03.052789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:10:03.052792 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:00.246) 0:00:00.527 ********* 2026-03-17 01:10:03.052795 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-17 01:10:03.052799 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-17 01:10:03.052802 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-17 01:10:03.052806 | orchestrator | 2026-03-17
01:10:03.052809 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-17 01:10:03.052812 | orchestrator | 2026-03-17 01:10:03.052815 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-17 01:10:03.052819 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:00.251) 0:00:00.779 ********* 2026-03-17 01:10:03.052822 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:10:03.052825 | orchestrator | 2026-03-17 01:10:03.052828 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-17 01:10:03.052832 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:00.422) 0:00:01.202 ********* 2026-03-17 01:10:03.052847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052888 | orchestrator | 2026-03-17 01:10:03.052893 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-17 01:10:03.052898 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:01.031) 0:00:02.233 ********* 2026-03-17 01:10:03.052903 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-17 01:10:03.052909 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-17 01:10:03.052914 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:10:03.052919 | orchestrator | 2026-03-17 01:10:03.052922 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-17 01:10:03.052925 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:00.779) 0:00:03.012 ********* 2026-03-17 01:10:03.052928 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:10:03.052932 | 
orchestrator | 2026-03-17 01:10:03.052935 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-17 01:10:03.052938 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:00.471) 0:00:03.484 ********* 2026-03-17 01:10:03.052948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:03.052962 | orchestrator | 2026-03-17 01:10:03.052965 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-17 01:10:03.052968 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:01.290) 0:00:04.774 ********* 2026-03-17 01:10:03.052971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 01:10:03.052974 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:03.052980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 01:10:03.052983 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:03.052988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 01:10:03.052992 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:03.052995 | orchestrator | 2026-03-17 01:10:03.052998 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-17 01:10:03.053001 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:00.420) 0:00:05.194 ********* 2026-03-17 01:10:03.053004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-1] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-2] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000 (0:00:00.554) 0:00:05.749 *********
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:19 +0000 (0:00:01.298) 0:00:07.047 *********
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-2] => (item=grafana)
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:20 +0000 (0:00:01.173) 0:00:08.221 *********
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:20 +0000 (0:00:00.289) 0:00:08.510 *********
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:22 +0000 (0:00:01.110) 0:00:09.621 *********
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:23 +0000 (0:00:01.171) 0:00:10.792 *********
2026-03-17 01:10:03 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:24 +0000 (0:00:01.599) 0:00:12.392 *********
2026-03-17 01:10:03 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-17 01:10:03 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-17 01:10:03 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:03 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:03 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:25 +0000 (0:00:00.808) 0:00:13.201 *********
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03 | orchestrator |
2026-03-17 01:10:03 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-17 01:10:03 | orchestrator | Tuesday 17 March 2026 01:08:25 +0000 (0:00:00.306) 0:00:13.507 *********
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/ceph-cluster-advanced.json, size 121701)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/cephfsdashboard.json, size 143913)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/rbd-overview.json, size 26019)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/ceph_pools.json, size 25279)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/rgw-s3-analytics.json, size 170293)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/ceph-nvmeof-performance.json, size 33297)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/osd-device-details.json, size 26346)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/radosgw-overview.json, size 46110)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/README.md, size 84)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/ceph-cluster.json, size 34113)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/cephfs-overview.json, size 9025)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/pool-detail.json, size 19231)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/rbd-details.json, size 13320)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-0,1,2] => (item=ceph/ceph_overview.json, size 80386)
2026-03-17 01:10:03 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json, size 20042)
2026-03-17 01:10:03.053856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 2106883, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6891968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth':
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 2106883, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6891968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 2106895, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6937492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 2106895, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6937492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 2106895, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6937492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 2106879, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6878622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 2106879, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6878622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 2106879, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6878622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 2106877, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.686958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 2106877, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.686958, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 2106877, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.686958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 2106876, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6858962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 2106876, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 
1773708109.6858962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 2106876, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6858962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 2106881, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.688587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 2106881, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 
1773705743.0, 'ctime': 1773708109.688587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 2106881, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.688587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 2106875, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.684958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 2106875, 'dev': 150, 'nlink': 1, 'atime': 
1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.684958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 2106875, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.684958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 2106887, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.690958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 
'inode': 2106887, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.690958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 2106887, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.690958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 2106870, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.681958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 52667, 'inode': 2106870, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.681958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 2106870, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.681958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.053998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 2106923, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7111993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 2106923, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7111993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 2106923, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7111993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 2106908, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6989582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 2106908, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6989582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 2106908, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6989582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 2106904, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 2106904, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 2106904, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 2106912, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7013402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 2106912, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7013402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 2106912, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7013402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 2106899, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6939583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054562 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 2106899, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6939583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 2106899, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6939583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 2106916, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7063813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 2106916, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7063813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 2106916, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7063813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 2106913, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 2106913, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 2106913, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 2106917, 'dev': 150, 'nlink': 1, 
'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.707043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 2106917, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.707043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 2106917, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.707043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 2106921, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 2106921, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 2106921, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 2106915, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7061303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 2106915, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7061303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 2106915, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7061303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 2106910, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7007055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 2106910, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7007055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 2106910, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7007055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 2106906, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6969583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 2106906, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6969583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 2106906, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6969583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054761 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 2106909, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6999583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 2106909, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6999583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 2106909, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6999583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054784 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 2106905, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6960063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 2106905, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6960063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 2106905, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6960063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 2106911, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7010443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 2106911, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7010443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 2106911, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7010443, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 2106920, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 2106920, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 2106920, 'dev': 150, 'nlink': 1, 'atime': 
1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7099583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 2106919, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7079585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 2106919, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7079585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 2106919, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7079585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 2106902, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6944695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 2106902, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6944695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 2106902, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6944695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 2106903, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 2106903, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 2106903, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.6949582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 2106914, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 2106914, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 
01:10:03.054922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 2106914, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7049584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 2106918, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7074838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:03.054942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 2106918, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7074838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:03.054947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 2106918, 'dev': 150, 'nlink': 1, 'atime': 1773705743.0, 'mtime': 1773705743.0, 'ctime': 1773708109.7074838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:03.054952 | orchestrator |
2026-03-17 01:10:03.054958 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-17 01:10:03.054964 | orchestrator | Tuesday 17 March 2026 01:09:03 +0000 (0:00:37.313) 0:00:50.820 *********
2026-03-17 01:10:03.054969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:03.054975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:03.054980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:03.054985 | orchestrator |
2026-03-17 01:10:03.054993 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-17 01:10:03.054998 | orchestrator | Tuesday 17 March 2026 01:09:04 +0000 (0:00:01.034) 0:00:51.855 *********
2026-03-17 01:10:03.055003 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:10:03.055013 | orchestrator |
2026-03-17 01:10:03.055019 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-17 01:10:03.055024 | orchestrator | Tuesday 17 March 2026 01:09:06 +0000 (0:00:02.295) 0:00:54.150 *********
2026-03-17 01:10:03.055027 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:10:03.055033 | orchestrator |
2026-03-17 01:10:03.055038 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-17 01:10:03.055043 | orchestrator | Tuesday 17 March 2026 01:09:09 +0000 (0:00:02.702) 0:00:56.853 *********
2026-03-17 01:10:03.055048 | orchestrator |
2026-03-17 01:10:03.055053 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-17 01:10:03.055058 | orchestrator | Tuesday 17 March 2026 01:09:09 +0000 (0:00:00.061) 0:00:56.915 *********
2026-03-17 01:10:03.055063 | orchestrator |
2026-03-17 01:10:03.055069 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-17 01:10:03.055074 | orchestrator | Tuesday 17 March 2026 01:09:09 +0000 (0:00:00.059) 0:00:56.975 *********
2026-03-17 01:10:03.055079 | orchestrator |
2026-03-17 01:10:03.055084 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-17 01:10:03.055090 | orchestrator | Tuesday 17 March 2026 01:09:09 +0000 (0:00:00.059) 0:00:57.034 *********
2026-03-17 01:10:03.055095 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03.055100 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03.055109 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:10:03.055115 | orchestrator |
2026-03-17 01:10:03.055120 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-17 01:10:03.055126 | orchestrator | Tuesday 17 March 2026 01:09:16 +0000 (0:00:07.300) 0:01:04.335 *********
2026-03-17 01:10:03.055131 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03.055136 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03.055141 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-17 01:10:03.055147 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:03.055154 | orchestrator |
2026-03-17 01:10:03.055159 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-17 01:10:03.055164 | orchestrator | Tuesday 17 March 2026 01:09:31 +0000 (0:00:15.036) 0:01:19.372 *********
2026-03-17 01:10:03.055170 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:03.055188 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:10:03.055194 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:10:03.055199 | orchestrator |
2026-03-17 01:10:03.055205 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-17 01:10:03.055211 | orchestrator | Tuesday 17 March 2026 01:09:56 +0000 (0:00:24.769) 0:01:44.142 *********
2026-03-17 01:10:03.055217 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:03.055223 | orchestrator |
2026-03-17 01:10:03.055228 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-17 01:10:03.055233 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:02.068) 0:01:46.210 *********
2026-03-17 01:10:03.055238 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:03.055244 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:03.055249 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:03.055254 | orchestrator |
2026-03-17 01:10:03.055260 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-17 01:10:03.055266 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.265) 0:01:46.476 *********
2026-03-17 01:10:03.055272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})  2026-03-17 01:10:03.055278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-17 01:10:03.055288 | orchestrator | 2026-03-17 01:10:03.055293 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-17 01:10:03.055299 | orchestrator | Tuesday 17 March 2026 01:10:01 +0000 (0:00:02.131) 0:01:48.607 ********* 2026-03-17 01:10:03.055304 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:03.055309 | orchestrator | 2026-03-17 01:10:03.055314 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:10:03.055320 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:10:03.055327 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:10:03.055332 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:10:03.055337 | orchestrator | 2026-03-17 01:10:03.055343 | orchestrator | 2026-03-17 01:10:03.055348 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:10:03.055353 | orchestrator | Tuesday 17 March 2026 01:10:01 +0000 (0:00:00.209) 0:01:48.817 ********* 2026-03-17 01:10:03.055358 | orchestrator | =============================================================================== 2026-03-17 01:10:03.055366 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.31s 2026-03-17 01:10:03.055372 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 24.77s 2026-03-17 01:10:03.055377 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 15.04s 2026-03-17 01:10:03.055382 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.30s 2026-03-17 01:10:03.055388 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.70s 2026-03-17 01:10:03.055393 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2026-03-17 01:10:03.055399 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.13s 2026-03-17 01:10:03.055404 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.07s 2026-03-17 01:10:03.055409 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.60s 2026-03-17 01:10:03.055414 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s 2026-03-17 01:10:03.055421 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-03-17 01:10:03.055426 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.17s 2026-03-17 01:10:03.055432 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.17s 2026-03-17 01:10:03.055442 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.11s 2026-03-17 01:10:03.055448 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2026-03-17 01:10:03.055461 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.03s 2026-03-17 01:10:03.055472 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.81s 2026-03-17 01:10:03.055477 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.78s 2026-03-17 01:10:03.055483 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.55s 2026-03-17 01:10:03.055488 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.47s 2026-03-17 01:10:03.055494 | orchestrator | 2026-03-17 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:06.095954 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:10:06.097732 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:10:06.097838 | orchestrator | 2026-03-17 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:09.136506 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:10:09.139264 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:10:09.139307 | orchestrator | 2026-03-17 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:12.175616 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:10:12.178993 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:10:12.179043 | orchestrator | 2026-03-17 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:15.215405 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED 2026-03-17 01:10:15.216270 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:10:15.216316 | orchestrator | 2026-03-17 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:18.254196 | orchestrator | 2026-03-17 01:10:18 | 
INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:18.256268 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:18.256307 | orchestrator | 2026-03-17 01:10:18 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:21.313032 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:21.314405 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:21.314583 | orchestrator | 2026-03-17 01:10:21 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:24.363591 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:24.365028 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:24.365076 | orchestrator | 2026-03-17 01:10:24 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:27.407368 | orchestrator | 2026-03-17 01:10:27 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:27.409387 | orchestrator | 2026-03-17 01:10:27 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:27.409489 | orchestrator | 2026-03-17 01:10:27 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:30.447339 | orchestrator | 2026-03-17 01:10:30 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:30.448907 | orchestrator | 2026-03-17 01:10:30 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:30.448969 | orchestrator | 2026-03-17 01:10:30 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:33.485693 | orchestrator | 2026-03-17 01:10:33 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state STARTED
2026-03-17 01:10:33.485739 | orchestrator | 2026-03-17 01:10:33 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:33.485745 | orchestrator | 2026-03-17 01:10:33 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:36.530037 | orchestrator | 2026-03-17 01:10:36 | INFO  | Task ff7a6dbd-afe2-4338-b6a8-373507d5f512 is in state SUCCESS
2026-03-17 01:10:36.531378 | orchestrator | 2026-03-17 01:10:36 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:36.531412 | orchestrator | 2026-03-17 01:10:36 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:39.569814 | orchestrator | 2026-03-17 01:10:39 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:39.570895 | orchestrator | 2026-03-17 01:10:39 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:39.570969 | orchestrator | 2026-03-17 01:10:39 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:42.611393 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:42.613902 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:42.613986 | orchestrator | 2026-03-17 01:10:42 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:45.651375 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:45.653687 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:45.655533 | orchestrator | 2026-03-17 01:10:45 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:48.688168 | orchestrator | 2026-03-17 01:10:48 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:48.689304 | orchestrator | 2026-03-17 01:10:48 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:48.689353 | orchestrator | 2026-03-17 01:10:48 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:51.724816 | orchestrator | 2026-03-17 01:10:51 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:51.726389 | orchestrator | 2026-03-17 01:10:51 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:51.726438 | orchestrator | 2026-03-17 01:10:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:54.769711 | orchestrator | 2026-03-17 01:10:54 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:54.771224 | orchestrator | 2026-03-17 01:10:54 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:54.771273 | orchestrator | 2026-03-17 01:10:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:57.807602 | orchestrator | 2026-03-17 01:10:57 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:10:57.808751 | orchestrator | 2026-03-17 01:10:57 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:10:57.808777 | orchestrator | 2026-03-17 01:10:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:00.861274 | orchestrator | 2026-03-17 01:11:00 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:00.863733 | orchestrator | 2026-03-17 01:11:00 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:00.863934 | orchestrator | 2026-03-17 01:11:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:03.908796 | orchestrator | 2026-03-17 01:11:03 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:03.911077 | orchestrator | 2026-03-17 01:11:03 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:03.911177 | orchestrator | 2026-03-17 01:11:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:06.957299 | orchestrator | 2026-03-17 01:11:06 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:06.958001 | orchestrator | 2026-03-17 01:11:06 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:06.958695 | orchestrator | 2026-03-17 01:11:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:10.000271 | orchestrator | 2026-03-17 01:11:09 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:10.000333 | orchestrator | 2026-03-17 01:11:10 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:10.000344 | orchestrator | 2026-03-17 01:11:10 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:13.048623 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:13.050121 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:13.051600 | orchestrator | 2026-03-17 01:11:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:16.094682 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:16.094767 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:16.094778 | orchestrator | 2026-03-17 01:11:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:19.131198 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:19.136501 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:19.137548 | orchestrator | 2026-03-17 01:11:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:22.197645 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:22.197689 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:22.197694 | orchestrator | 2026-03-17 01:11:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:25.243091 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:25.243406 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:25.245431 | orchestrator | 2026-03-17 01:11:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:28.280643 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:28.283172 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:28.283267 | orchestrator | 2026-03-17 01:11:28 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:31.344369 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:31.345729 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:31.345783 | orchestrator | 2026-03-17 01:11:31 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:34.378250 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:34.379998 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:34.380068 | orchestrator | 2026-03-17 01:11:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:37.426184 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:37.426271 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:37.426279 | orchestrator | 2026-03-17 01:11:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:40.466227 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:40.469892 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:40.469983 | orchestrator | 2026-03-17 01:11:40 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:43.503321 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:43.505473 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:43.505508 | orchestrator | 2026-03-17 01:11:43 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:46.537298 | orchestrator | 2026-03-17 01:11:46 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:46.540385 | orchestrator | 2026-03-17 01:11:46 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:46.540435 | orchestrator | 2026-03-17 01:11:46 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:49.583773 | orchestrator | 2026-03-17 01:11:49 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:49.584670 | orchestrator | 2026-03-17 01:11:49 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:49.584842 | orchestrator | 2026-03-17 01:11:49 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:52.631403 | orchestrator | 2026-03-17 01:11:52 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:52.632227 | orchestrator | 2026-03-17 01:11:52 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:52.632254 | orchestrator | 2026-03-17 01:11:52 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:55.677445 | orchestrator | 2026-03-17 01:11:55 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:55.677965 | orchestrator | 2026-03-17 01:11:55 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:55.678238 | orchestrator | 2026-03-17 01:11:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:58.718515 | orchestrator | 2026-03-17 01:11:58 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:11:58.720444 | orchestrator | 2026-03-17 01:11:58 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:11:58.720483 | orchestrator | 2026-03-17 01:11:58 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:01.755527 | orchestrator | 2026-03-17 01:12:01 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:01.757266 | orchestrator | 2026-03-17 01:12:01 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:01.757454 | orchestrator | 2026-03-17 01:12:01 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:04.794302 | orchestrator | 2026-03-17 01:12:04 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:04.795966 | orchestrator | 2026-03-17 01:12:04 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:04.796141 | orchestrator | 2026-03-17 01:12:04 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:07.841319 | orchestrator | 2026-03-17 01:12:07 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:07.842517 | orchestrator | 2026-03-17 01:12:07 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:07.842549 | orchestrator | 2026-03-17 01:12:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:10.883407 | orchestrator | 2026-03-17 01:12:10 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:10.885713 | orchestrator | 2026-03-17 01:12:10 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:10.885805 | orchestrator | 2026-03-17 01:12:10 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:13.924511 | orchestrator | 2026-03-17 01:12:13 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:13.924568 | orchestrator | 2026-03-17 01:12:13 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:13.924575 | orchestrator | 2026-03-17 01:12:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:16.972043 | orchestrator | 2026-03-17 01:12:16 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:16.973178 | orchestrator | 2026-03-17 01:12:16 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:16.973244 | orchestrator | 2026-03-17 01:12:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:20.024941 | orchestrator | 2026-03-17 01:12:20 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:20.026743 | orchestrator | 2026-03-17 01:12:20 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:20.026792 | orchestrator | 2026-03-17 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:23.073901 | orchestrator | 2026-03-17 01:12:23 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:23.079784 | orchestrator | 2026-03-17 01:12:23 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:23.079829 | orchestrator | 2026-03-17 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:26.125569 | orchestrator | 2026-03-17 01:12:26 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:26.127046 | orchestrator | 2026-03-17 01:12:26 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:26.127093 | orchestrator | 2026-03-17 01:12:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:29.172566 | orchestrator | 2026-03-17 01:12:29 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:29.172863 | orchestrator | 2026-03-17 01:12:29 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:29.172894 | orchestrator | 2026-03-17 01:12:29 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:32.207285 | orchestrator | 2026-03-17 01:12:32 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:32.209594 | orchestrator | 2026-03-17 01:12:32 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:32.209660 | orchestrator | 2026-03-17 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:35.254371 | orchestrator | 2026-03-17 01:12:35 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:35.254557 | orchestrator | 2026-03-17 01:12:35 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:35.254617 | orchestrator | 2026-03-17 01:12:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:38.298711 | orchestrator | 2026-03-17 01:12:38 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:38.300936 | orchestrator | 2026-03-17 01:12:38 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:38.301052 | orchestrator | 2026-03-17 01:12:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:41.337155 | orchestrator | 2026-03-17 01:12:41 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:41.338264 | orchestrator | 2026-03-17 01:12:41 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:41.338309 | orchestrator | 2026-03-17 01:12:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:44.379375 | orchestrator | 2026-03-17 01:12:44 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:44.380806 | orchestrator | 2026-03-17 01:12:44 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:44.381000 | orchestrator | 2026-03-17 01:12:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:47.419626 | orchestrator | 2026-03-17 01:12:47 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:47.421812 | orchestrator | 2026-03-17 01:12:47 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:47.422169 | orchestrator | 2026-03-17 01:12:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:50.457172 | orchestrator | 2026-03-17 01:12:50 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:50.458163 | orchestrator | 2026-03-17 01:12:50 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:50.458195 | orchestrator | 2026-03-17 01:12:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:53.503771 | orchestrator | 2026-03-17 01:12:53 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:53.505061 | orchestrator | 2026-03-17 01:12:53 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:53.505101 | orchestrator | 2026-03-17 01:12:53 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:56.553032 | orchestrator | 2026-03-17 01:12:56 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:56.558395 | orchestrator | 2026-03-17 01:12:56 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:56.558451 | orchestrator | 2026-03-17 01:12:56 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:59.615510 | orchestrator | 2026-03-17 01:12:59 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:12:59.616480 | orchestrator | 2026-03-17 01:12:59 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:12:59.616513 | orchestrator | 2026-03-17 01:12:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:02.660605 | orchestrator | 2026-03-17 01:13:02 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:02.664469 | orchestrator | 2026-03-17 01:13:02 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:02.664988 | orchestrator | 2026-03-17 01:13:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:05.702503 | orchestrator | 2026-03-17 01:13:05 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:05.703321 | orchestrator | 2026-03-17 01:13:05 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:05.703558 | orchestrator | 2026-03-17 01:13:05 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:08.750099 | orchestrator | 2026-03-17 01:13:08 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:08.750157 | orchestrator | 2026-03-17 01:13:08 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:08.750171 | orchestrator | 2026-03-17 01:13:08 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:11.796145 | orchestrator | 2026-03-17 01:13:11 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:11.798506 | orchestrator | 2026-03-17 01:13:11 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:11.798589 | orchestrator | 2026-03-17 01:13:11 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:14.841984 | orchestrator | 2026-03-17 01:13:14 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:14.843408 | orchestrator | 2026-03-17 01:13:14 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:14.843476 | orchestrator | 2026-03-17 01:13:14 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:17.875275 | orchestrator | 2026-03-17 01:13:17 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:17.875891 | orchestrator | 2026-03-17 01:13:17 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:17.875944 | orchestrator | 2026-03-17 01:13:17 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:20.900162 | orchestrator | 2026-03-17 01:13:20 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:20.901174 | orchestrator | 2026-03-17 01:13:20 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:20.901249 | orchestrator | 2026-03-17 01:13:20 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:23.928580 | orchestrator | 2026-03-17 01:13:23 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:23.929719 | orchestrator | 2026-03-17 01:13:23 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:23.929764 | orchestrator | 2026-03-17 01:13:23 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:26.957939 | orchestrator | 2026-03-17 01:13:26 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:26.961654 | orchestrator | 2026-03-17 01:13:26 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:26.961707 | orchestrator | 2026-03-17 01:13:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:30.011794 | orchestrator | 2026-03-17 01:13:30 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:30.012451 | orchestrator | 2026-03-17 01:13:30 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:30.012493 | orchestrator | 2026-03-17 01:13:30 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:33.055484 | orchestrator | 2026-03-17 01:13:33 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:33.057125 | orchestrator | 2026-03-17 01:13:33 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:33.057174 | orchestrator | 2026-03-17 01:13:33 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:36.107792 | orchestrator | 2026-03-17 01:13:36 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:36.107866 | orchestrator | 2026-03-17 01:13:36 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:36.108274 | orchestrator | 2026-03-17 01:13:36 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:39.150563 | orchestrator | 2026-03-17 01:13:39 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:39.153811 | orchestrator | 2026-03-17 01:13:39 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:39.153853 | orchestrator | 2026-03-17 01:13:39 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:42.193207 | orchestrator | 2026-03-17 01:13:42 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:42.195304 | orchestrator | 2026-03-17 01:13:42 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:42.195353 | orchestrator | 2026-03-17 01:13:42 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:45.228479 | orchestrator | 2026-03-17 01:13:45 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:45.229022 | orchestrator | 2026-03-17 01:13:45 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:45.229051 | orchestrator | 2026-03-17 01:13:45 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:48.267830 | orchestrator | 2026-03-17 01:13:48 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:48.268596 | orchestrator | 2026-03-17 01:13:48 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:48.268637 | orchestrator | 2026-03-17 01:13:48 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:51.309415 | orchestrator | 2026-03-17 01:13:51 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:51.309732 | orchestrator | 2026-03-17 01:13:51 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:51.309762 | orchestrator | 2026-03-17 01:13:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:54.349365 | orchestrator | 2026-03-17 01:13:54 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:54.349585 | orchestrator | 2026-03-17 01:13:54 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:54.349648 | orchestrator | 2026-03-17 01:13:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:13:57.401585 | orchestrator | 2026-03-17 01:13:57 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED
2026-03-17 01:13:57.403105 | orchestrator | 2026-03-17 01:13:57 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED
2026-03-17 01:13:57.403157 | orchestrator | 2026-03-17 01:13:57 | INFO  | Wait 1 second(s)
until the next check 2026-03-17 01:14:00.443591 | orchestrator | 2026-03-17 01:14:00 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:00.445750 | orchestrator | 2026-03-17 01:14:00 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:00.446168 | orchestrator | 2026-03-17 01:14:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:03.494599 | orchestrator | 2026-03-17 01:14:03 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:03.496641 | orchestrator | 2026-03-17 01:14:03 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:03.496813 | orchestrator | 2026-03-17 01:14:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:06.543597 | orchestrator | 2026-03-17 01:14:06 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:06.544802 | orchestrator | 2026-03-17 01:14:06 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:06.545009 | orchestrator | 2026-03-17 01:14:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:09.589742 | orchestrator | 2026-03-17 01:14:09 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:09.591900 | orchestrator | 2026-03-17 01:14:09 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:09.591941 | orchestrator | 2026-03-17 01:14:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:12.632414 | orchestrator | 2026-03-17 01:14:12 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:12.634353 | orchestrator | 2026-03-17 01:14:12 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:12.634399 | orchestrator | 2026-03-17 01:14:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:15.662900 | orchestrator | 2026-03-17 
01:14:15 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:15.665307 | orchestrator | 2026-03-17 01:14:15 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:15.665360 | orchestrator | 2026-03-17 01:14:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:18.689720 | orchestrator | 2026-03-17 01:14:18 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:18.691170 | orchestrator | 2026-03-17 01:14:18 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:18.691229 | orchestrator | 2026-03-17 01:14:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:21.717709 | orchestrator | 2026-03-17 01:14:21 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:21.718415 | orchestrator | 2026-03-17 01:14:21 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:21.718443 | orchestrator | 2026-03-17 01:14:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:24.743245 | orchestrator | 2026-03-17 01:14:24 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:24.745111 | orchestrator | 2026-03-17 01:14:24 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:24.745177 | orchestrator | 2026-03-17 01:14:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:27.787135 | orchestrator | 2026-03-17 01:14:27 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:27.788584 | orchestrator | 2026-03-17 01:14:27 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:27.788621 | orchestrator | 2026-03-17 01:14:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:30.833029 | orchestrator | 2026-03-17 01:14:30 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state 
STARTED 2026-03-17 01:14:30.833955 | orchestrator | 2026-03-17 01:14:30 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:30.834678 | orchestrator | 2026-03-17 01:14:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:33.867965 | orchestrator | 2026-03-17 01:14:33 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:33.870144 | orchestrator | 2026-03-17 01:14:33 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:33.870185 | orchestrator | 2026-03-17 01:14:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:36.922652 | orchestrator | 2026-03-17 01:14:36 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:36.924771 | orchestrator | 2026-03-17 01:14:36 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:36.924898 | orchestrator | 2026-03-17 01:14:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:39.972314 | orchestrator | 2026-03-17 01:14:39 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:39.975035 | orchestrator | 2026-03-17 01:14:39 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:39.975108 | orchestrator | 2026-03-17 01:14:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:43.021160 | orchestrator | 2026-03-17 01:14:43 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:43.025352 | orchestrator | 2026-03-17 01:14:43 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:43.025411 | orchestrator | 2026-03-17 01:14:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:46.068222 | orchestrator | 2026-03-17 01:14:46 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:46.069682 | orchestrator | 2026-03-17 01:14:46 | INFO  
| Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:46.069734 | orchestrator | 2026-03-17 01:14:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:49.111297 | orchestrator | 2026-03-17 01:14:49 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:49.112868 | orchestrator | 2026-03-17 01:14:49 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:49.112917 | orchestrator | 2026-03-17 01:14:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:52.159143 | orchestrator | 2026-03-17 01:14:52 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:52.161011 | orchestrator | 2026-03-17 01:14:52 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:52.161066 | orchestrator | 2026-03-17 01:14:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:55.192194 | orchestrator | 2026-03-17 01:14:55 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:55.195860 | orchestrator | 2026-03-17 01:14:55 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:55.195914 | orchestrator | 2026-03-17 01:14:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:58.241749 | orchestrator | 2026-03-17 01:14:58 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:14:58.245698 | orchestrator | 2026-03-17 01:14:58 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state STARTED 2026-03-17 01:14:58.246157 | orchestrator | 2026-03-17 01:14:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:15:01.292617 | orchestrator | 2026-03-17 01:15:01 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state STARTED 2026-03-17 01:17:01.409371 | orchestrator | 2026-03-17 01:17:01.409424 | orchestrator | 2026-03-17 01:17:01.409429 | orchestrator | PLAY [Group hosts 
based on configuration] **************************************
2026-03-17 01:17:01.409434 | orchestrator |
2026-03-17 01:17:01.409438 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:17:01.409442 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:00.178) 0:00:00.178 *********
2026-03-17 01:17:01.409446 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.409451 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:17:01.409455 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:17:01.409458 | orchestrator |
2026-03-17 01:17:01.409462 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:17:01.409467 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:00.286) 0:00:00.464 *********
2026-03-17 01:17:01.409471 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:17:01.409475 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:17:01.409478 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:17:01.409482 | orchestrator |
2026-03-17 01:17:01.409486 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-17 01:17:01.409489 | orchestrator |
2026-03-17 01:17:01.409493 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-17 01:17:01.409497 | orchestrator | Tuesday 17 March 2026 01:07:43 +0000 (0:00:00.410) 0:00:00.874 *********
2026-03-17 01:17:01.409501 | orchestrator |
2026-03-17 01:17:01.409504 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-17 01:17:01.409508 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.409513 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:17:01.409520 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:17:01.409526 | orchestrator |
2026-03-17 01:17:01.409532 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:17:01.409539 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:17:01.409546 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:17:01.409552 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:17:01.409557 | orchestrator |
2026-03-17 01:17:01.409563 | orchestrator |
2026-03-17 01:17:01.409612 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:17:01.409621 | orchestrator | Tuesday 17 March 2026 01:10:35 +0000 (0:02:52.044) 0:02:52.919 *********
2026-03-17 01:17:01.409627 | orchestrator | ===============================================================================
2026-03-17 01:17:01.409633 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 172.04s
2026-03-17 01:17:01.409657 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-03-17 01:17:01.409663 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-17 01:17:01.409668 | orchestrator |
2026-03-17 01:17:01.409674 | orchestrator |
2026-03-17 01:17:01.409679 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:17:01.409685 | orchestrator |
2026-03-17 01:17:01.409698 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-17 01:17:01.409704 | orchestrator | Tuesday 17 March 2026 01:06:43 +0000 (0:00:00.358) 0:00:00.358 *********
2026-03-17 01:17:01.409710 | orchestrator | changed: [testbed-manager]
2026-03-17 01:17:01.409717 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.409723 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:17:01.409744 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:17:01.409752 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:17:01.409758 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:17:01.409764 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:17:01.409782 | orchestrator |
2026-03-17 01:17:01.409788 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:17:01.409795 | orchestrator | Tuesday 17 March 2026 01:06:44 +0000 (0:00:00.836) 0:00:01.194 *********
2026-03-17 01:17:01.409801 | orchestrator | changed: [testbed-manager]
2026-03-17 01:17:01.409807 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.409814 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:17:01.409820 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:17:01.409826 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:17:01.409833 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:17:01.409839 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:17:01.409845 | orchestrator |
2026-03-17 01:17:01.409850 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:17:01.409856 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:00.683) 0:00:01.908 *********
2026-03-17 01:17:01.409861 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-17 01:17:01.409867 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:17:01.409873 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:17:01.409878 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:17:01.409884 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-17 01:17:01.409890 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-17 01:17:01.409896 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-17 01:17:01.409902 | orchestrator |
2026-03-17 01:17:01.409908 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-17 01:17:01.409914 | orchestrator |
2026-03-17 01:17:01.409920 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:17:01.409925 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:00.654) 0:00:02.591 *********
2026-03-17 01:17:01.409932 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:17:01.409938 | orchestrator |
2026-03-17 01:17:01.409944 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-17 01:17:01.409962 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:00.654) 0:00:03.245 *********
2026-03-17 01:17:01.409969 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-17 01:17:01.409976 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-17 01:17:01.409982 | orchestrator |
2026-03-17 01:17:01.409989 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-17 01:17:01.409996 | orchestrator | Tuesday 17 March 2026 01:06:50 +0000 (0:00:04.207) 0:00:07.452 *********
2026-03-17 01:17:01.410002 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:17:01.410009 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:17:01.410044 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410049 | orchestrator |
2026-03-17 01:17:01.410134 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-17 01:17:01.410141 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:04.467) 0:00:11.920 *********
2026-03-17 01:17:01.410147 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410153 | orchestrator |
2026-03-17 01:17:01.410158 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-17 01:17:01.410164 | orchestrator | Tuesday 17 March 2026 01:06:56 +0000 (0:00:01.079) 0:00:13.000 *********
2026-03-17 01:17:01.410171 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410176 | orchestrator |
2026-03-17 01:17:01.410183 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-17 01:17:01.410189 | orchestrator | Tuesday 17 March 2026 01:06:58 +0000 (0:00:01.955) 0:00:14.955 *********
2026-03-17 01:17:01.410196 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410202 | orchestrator |
2026-03-17 01:17:01.410208 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:17:01.410223 | orchestrator | Tuesday 17 March 2026 01:07:01 +0000 (0:00:03.549) 0:00:18.505 *********
2026-03-17 01:17:01.410229 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410236 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410243 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410249 | orchestrator |
2026-03-17 01:17:01.410256 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-17 01:17:01.410262 | orchestrator | Tuesday 17 March 2026 01:07:02 +0000 (0:00:00.661) 0:00:19.167 *********
2026-03-17 01:17:01.410269 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410276 | orchestrator |
2026-03-17 01:17:01.410283 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-17 01:17:01.410288 | orchestrator | Tuesday 17 March 2026 01:07:32 +0000 (0:00:30.051) 0:00:49.218 *********
2026-03-17 01:17:01.410293 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410297 | orchestrator |
2026-03-17 01:17:01.410302 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:17:01.410306 | orchestrator | Tuesday 17 March 2026 01:07:47 +0000 (0:00:14.443) 0:01:03.662 *********
2026-03-17 01:17:01.410311 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410315 | orchestrator |
2026-03-17 01:17:01.410319 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-17 01:17:01.410324 | orchestrator | Tuesday 17 March 2026 01:07:59 +0000 (0:00:12.548) 0:01:16.211 *********
2026-03-17 01:17:01.410328 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410332 | orchestrator |
2026-03-17 01:17:01.410337 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-17 01:17:01.410341 | orchestrator | Tuesday 17 March 2026 01:08:00 +0000 (0:00:00.759) 0:01:16.970 *********
2026-03-17 01:17:01.410345 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410350 | orchestrator |
2026-03-17 01:17:01.410354 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:17:01.410358 | orchestrator | Tuesday 17 March 2026 01:08:00 +0000 (0:00:00.450) 0:01:17.421 *********
2026-03-17 01:17:01.410367 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:17:01.410372 | orchestrator |
2026-03-17 01:17:01.410376 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-17 01:17:01.410380 | orchestrator | Tuesday 17 March 2026 01:08:01 +0000 (0:00:00.661) 0:01:18.082 *********
2026-03-17 01:17:01.410385 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410389 | orchestrator |
2026-03-17 01:17:01.410393 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-17 01:17:01.410398 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000
(0:00:16.731) 0:01:34.813 *********
2026-03-17 01:17:01.410402 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410406 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410411 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410415 | orchestrator |
2026-03-17 01:17:01.410419 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-17 01:17:01.410424 | orchestrator |
2026-03-17 01:17:01.410428 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:17:01.410433 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000 (0:00:00.298) 0:01:35.111 *********
2026-03-17 01:17:01.410437 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:17:01.410441 | orchestrator |
2026-03-17 01:17:01.410445 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-17 01:17:01.410450 | orchestrator | Tuesday 17 March 2026 01:08:19 +0000 (0:00:00.772) 0:01:35.884 *********
2026-03-17 01:17:01.410454 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410459 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410463 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410467 | orchestrator |
2026-03-17 01:17:01.410472 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-17 01:17:01.410479 | orchestrator | Tuesday 17 March 2026 01:08:21 +0000 (0:00:01.799) 0:01:37.683 *********
2026-03-17 01:17:01.410483 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410487 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410492 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410496 | orchestrator |
2026-03-17 01:17:01.410501 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:17:01.410510 | orchestrator | Tuesday 17 March 2026 01:08:23 +0000 (0:00:02.063) 0:01:39.747 *********
2026-03-17 01:17:01.410515 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410519 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410524 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410528 | orchestrator |
2026-03-17 01:17:01.410532 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:17:01.410537 | orchestrator | Tuesday 17 March 2026 01:08:23 +0000 (0:00:00.762) 0:01:40.510 *********
2026-03-17 01:17:01.410542 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:17:01.410546 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410550 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 01:17:01.410555 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410559 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 01:17:01.410564 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-17 01:17:01.410568 | orchestrator |
2026-03-17 01:17:01.410644 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:17:01.410651 | orchestrator | Tuesday 17 March 2026 01:08:31 +0000 (0:00:07.786) 0:01:48.297 *********
2026-03-17 01:17:01.410657 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410664 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410671 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410677 | orchestrator |
2026-03-17 01:17:01.410684 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:17:01.410691 | orchestrator | Tuesday 17 March 2026 01:08:31 +0000 (0:00:00.296) 0:01:48.593 *********
2026-03-17 01:17:01.410698 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-17 01:17:01.410704 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.410712 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:17:01.410717 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410722 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 01:17:01.410726 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410730 | orchestrator |
2026-03-17 01:17:01.410734 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-17 01:17:01.410739 | orchestrator | Tuesday 17 March 2026 01:08:33 +0000 (0:00:01.190) 0:01:49.784 *********
2026-03-17 01:17:01.410743 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410747 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410752 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410756 | orchestrator |
2026-03-17 01:17:01.410761 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-17 01:17:01.410768 | orchestrator | Tuesday 17 March 2026 01:08:33 +0000 (0:00:00.530) 0:01:50.314 *********
2026-03-17 01:17:01.410778 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410784 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410790 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410797 | orchestrator |
2026-03-17 01:17:01.410803 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-17 01:17:01.410809 | orchestrator | Tuesday 17 March 2026 01:08:34 +0000 (0:00:00.981) 0:01:51.295 *********
2026-03-17 01:17:01.410815 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410822 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410829 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.410835 | orchestrator |
2026-03-17 01:17:01.410891 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-17 01:17:01.410897 | orchestrator | Tuesday 17 March 2026 01:08:36 +0000 (0:00:02.159) 0:01:53.455 *********
2026-03-17 01:17:01.410901 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410906 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410910 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410915 | orchestrator |
2026-03-17 01:17:01.410919 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:17:01.410932 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:21.006) 0:02:14.462 *********
2026-03-17 01:17:01.410937 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410941 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410945 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410950 | orchestrator |
2026-03-17 01:17:01.410954 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-17 01:17:01.410964 | orchestrator | Tuesday 17 March 2026 01:09:12 +0000 (0:00:14.194) 0:02:28.656 *********
2026-03-17 01:17:01.410968 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:17:01.410979 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.410983 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.410988 | orchestrator |
2026-03-17 01:17:01.410992 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-17 01:17:01.410996 | orchestrator | Tuesday 17 March 2026 01:09:12 +0000 (0:00:00.862) 0:02:29.519 *********
2026-03-17 01:17:01.411001 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.411005 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.411009 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:17:01.411013 | orchestrator |
2026-03-17 01:17:01.411021 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-17 01:17:01.411026 | orchestrator | Tuesday 17 March 2026 01:09:24 +0000 (0:00:12.020) 0:02:41.539 *********
2026-03-17 01:17:01.411030 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.411034 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.411039 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.411043 | orchestrator |
2026-03-17 01:17:01.411047 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-17 01:17:01.411052 | orchestrator | Tuesday 17 March 2026 01:09:25 +0000 (0:00:01.047) 0:02:42.586 *********
2026-03-17 01:17:01.411056 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.411060 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.411065 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.411126 | orchestrator |
2026-03-17 01:17:01.411134 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-17 01:17:01.411140 | orchestrator |
2026-03-17 01:17:01.411146 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:17:01.411153 | orchestrator | Tuesday 17 March 2026 01:09:26 +0000 (0:00:00.287) 0:02:42.874 *********
2026-03-17 01:17:01.411166 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:17:01.411174 | orchestrator |
2026-03-17 01:17:01.411180 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-17 01:17:01.411187 | orchestrator | Tuesday 17 March 2026 01:09:26 +0000 (0:00:00.608) 0:02:43.483 *********
2026-03-17 01:17:01.411193 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-17 01:17:01.411201 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-17 01:17:01.411208 | orchestrator |
2026-03-17 01:17:01.411215 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-17 01:17:01.411222 | orchestrator | Tuesday 17 March 2026 01:09:30 +0000 (0:00:03.361) 0:02:46.844 *********
2026-03-17 01:17:01.411229 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-17 01:17:01.411237 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-17 01:17:01.411248 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-17 01:17:01.411253 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-17 01:17:01.411257 | orchestrator |
2026-03-17 01:17:01.411262 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-17 01:17:01.411267 | orchestrator | Tuesday 17 March 2026 01:09:36 +0000 (0:00:06.801) 0:02:53.646 *********
2026-03-17 01:17:01.411275 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:17:01.411284 | orchestrator |
2026-03-17 01:17:01.411291 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-17 01:17:01.411297 | orchestrator | Tuesday 17 March 2026 01:09:39 +0000 (0:00:02.894) 0:02:56.541 *********
2026-03-17 01:17:01.411303 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-17 01:17:01.411309 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:17:01.411315 | orchestrator |
2026-03-17 01:17:01.411321 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-17 01:17:01.411327 | orchestrator | Tuesday 17 March 2026 01:09:43 +0000 (0:00:03.356) 0:02:59.897 *********
2026-03-17 01:17:01.411333 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:17:01.411339 | orchestrator |
2026-03-17 01:17:01.411346 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-17 01:17:01.411352 | orchestrator | Tuesday 17 March 2026 01:09:46 +0000 (0:00:02.916) 0:03:02.813 *********
2026-03-17 01:17:01.411359 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-17 01:17:01.411366 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-17 01:17:01.411373 | orchestrator |
2026-03-17 01:17:01.411379 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-17 01:17:01.411383 | orchestrator | Tuesday 17 March 2026 01:09:52 +0000 (0:00:06.241) 0:03:09.054 *********
2026-03-17 01:17:01.411395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled':
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411436 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411440 | orchestrator | 2026-03-17 01:17:01.411445 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-17 01:17:01.411449 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:01.834) 0:03:10.889 ********* 2026-03-17 01:17:01.411454 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.411458 | orchestrator | 2026-03-17 01:17:01.411463 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-17 01:17:01.411473 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.124) 0:03:11.014 ********* 2026-03-17 01:17:01.411482 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.411490 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.411497 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.411503 | orchestrator | 2026-03-17 01:17:01.411510 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-17 01:17:01.411522 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.297) 0:03:11.311 ********* 2026-03-17 01:17:01.411529 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:17:01.411535 | orchestrator | 2026-03-17 01:17:01.411542 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-17 01:17:01.411549 | orchestrator | Tuesday 17 March 2026 01:09:55 +0000 
(0:00:00.729) 0:03:12.041 ********* 2026-03-17 01:17:01.411556 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.411563 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.411590 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.411596 | orchestrator | 2026-03-17 01:17:01.411600 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-17 01:17:01.411605 | orchestrator | Tuesday 17 March 2026 01:09:55 +0000 (0:00:00.256) 0:03:12.298 ********* 2026-03-17 01:17:01.411609 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:01.411614 | orchestrator | 2026-03-17 01:17:01.411618 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-17 01:17:01.411622 | orchestrator | Tuesday 17 March 2026 01:09:56 +0000 (0:00:00.593) 0:03:12.891 ********* 2026-03-17 01:17:01.411628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-17 01:17:01.411664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411668 | orchestrator | 2026-03-17 01:17:01.411675 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-17 01:17:01.411684 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:01.996) 0:03:14.887 ********* 2026-03-17 01:17:01.411699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411722 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.411728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411741 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.411750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411789 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.411796 | orchestrator | 2026-03-17 01:17:01.411802 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:17:01.411809 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.541) 0:03:15.429 ********* 2026-03-17 01:17:01.411820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411834 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.411844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411864 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.411877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.411885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.411892 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.411899 | orchestrator | 2026-03-17 01:17:01.411906 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-17 01:17:01.411912 | orchestrator | Tuesday 17 March 2026 01:09:59 +0000 (0:00:00.790) 0:03:16.219 ********* 2026-03-17 01:17:01.411920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.411968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.411994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412000 | orchestrator | 2026-03-17 01:17:01.412005 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-17 01:17:01.412012 | orchestrator | Tuesday 17 March 2026 01:10:01 +0000 (0:00:02.061) 0:03:18.280 ********* 2026-03-17 01:17:01.412023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412075 | orchestrator | 2026-03-17 01:17:01.412082 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-17 01:17:01.412089 | orchestrator | Tuesday 17 March 2026 01:10:06 +0000 (0:00:04.944) 0:03:23.225 ********* 2026-03-17 01:17:01.412096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.412104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.412112 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.412119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.412124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.412129 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.412137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:17:01.412142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.412149 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.412153 | orchestrator | 2026-03-17 01:17:01.412158 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-17 01:17:01.412162 | orchestrator | Tuesday 17 March 2026 01:10:07 +0000 (0:00:00.574) 0:03:23.799 ********* 2026-03-17 01:17:01.412167 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.412171 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.412176 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.412180 | orchestrator | 2026-03-17 01:17:01.412184 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-17 01:17:01.412189 | orchestrator | Tuesday 17 March 2026 01:10:08 +0000 (0:00:01.685) 0:03:25.485 ********* 2026-03-17 01:17:01.412193 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.412198 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.412202 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:17:01.412206 | orchestrator | 2026-03-17 01:17:01.412210 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-17 01:17:01.412218 | orchestrator | Tuesday 17 March 2026 01:10:09 +0000 (0:00:00.300) 0:03:25.785 ********* 2026-03-17 01:17:01.412223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:01.412248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.412262 | orchestrator | 2026-03-17 
01:17:01.412266 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:17:01.412271 | orchestrator | Tuesday 17 March 2026 01:10:10 +0000 (0:00:01.597) 0:03:27.383 ********* 2026-03-17 01:17:01.412275 | orchestrator | 2026-03-17 01:17:01.412282 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:17:01.412286 | orchestrator | Tuesday 17 March 2026 01:10:10 +0000 (0:00:00.135) 0:03:27.519 ********* 2026-03-17 01:17:01.412291 | orchestrator | 2026-03-17 01:17:01.412295 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:17:01.412299 | orchestrator | Tuesday 17 March 2026 01:10:10 +0000 (0:00:00.128) 0:03:27.647 ********* 2026-03-17 01:17:01.412304 | orchestrator | 2026-03-17 01:17:01.412309 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-17 01:17:01.412313 | orchestrator | Tuesday 17 March 2026 01:10:11 +0000 (0:00:00.271) 0:03:27.919 ********* 2026-03-17 01:17:01.412317 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.412324 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.412329 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.412333 | orchestrator | 2026-03-17 01:17:01.412337 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-17 01:17:01.412342 | orchestrator | Tuesday 17 March 2026 01:10:28 +0000 (0:00:17.005) 0:03:44.925 ********* 2026-03-17 01:17:01.412346 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.412351 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.412355 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.412359 | orchestrator | 2026-03-17 01:17:01.412364 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-17 01:17:01.412368 | 
orchestrator | 2026-03-17 01:17:01.412372 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:17:01.412377 | orchestrator | Tuesday 17 March 2026 01:10:37 +0000 (0:00:09.641) 0:03:54.566 ********* 2026-03-17 01:17:01.412381 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:01.412386 | orchestrator | 2026-03-17 01:17:01.412390 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:17:01.412395 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:01.077) 0:03:55.644 ********* 2026-03-17 01:17:01.412399 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.412403 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.412407 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.412412 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.412416 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.412421 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.412425 | orchestrator | 2026-03-17 01:17:01.412430 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-17 01:17:01.412434 | orchestrator | Tuesday 17 March 2026 01:10:39 +0000 (0:00:00.646) 0:03:56.291 ********* 2026-03-17 01:17:01.412438 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.412443 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.412447 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.412452 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:17:01.412458 | orchestrator | 2026-03-17 01:17:01.412465 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 01:17:01.412472 | orchestrator | 
Tuesday 17 March 2026 01:10:40 +0000 (0:00:00.818) 0:03:57.110 ********* 2026-03-17 01:17:01.412482 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-17 01:17:01.412489 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-17 01:17:01.412495 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-17 01:17:01.412500 | orchestrator | 2026-03-17 01:17:01.412506 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 01:17:01.412512 | orchestrator | Tuesday 17 March 2026 01:10:41 +0000 (0:00:01.019) 0:03:58.130 ********* 2026-03-17 01:17:01.412521 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-17 01:17:01.412527 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-17 01:17:01.412534 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-17 01:17:01.412540 | orchestrator | 2026-03-17 01:17:01.412546 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 01:17:01.412553 | orchestrator | Tuesday 17 March 2026 01:10:42 +0000 (0:00:01.223) 0:03:59.353 ********* 2026-03-17 01:17:01.412559 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-17 01:17:01.412566 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.412604 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-17 01:17:01.412611 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.412617 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-17 01:17:01.412628 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.412635 | orchestrator | 2026-03-17 01:17:01.412641 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-17 01:17:01.412647 | orchestrator | Tuesday 17 March 2026 01:10:43 +0000 (0:00:00.589) 0:03:59.943 ********* 2026-03-17 01:17:01.412654 
| orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:17:01.412660 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:17:01.412667 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.412671 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:17:01.412675 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:17:01.412678 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.412682 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:17:01.412686 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:17:01.412689 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.412693 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:17:01.412697 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:17:01.413077 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:17:01.413094 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 01:17:01.413098 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 01:17:01.413102 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 01:17:01.413106 | orchestrator | 2026-03-17 01:17:01.413110 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-17 01:17:01.413114 | orchestrator | Tuesday 17 March 2026 01:10:44 +0000 (0:00:01.078) 0:04:01.022 ********* 2026-03-17 01:17:01.413118 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.413122 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:17:01.413126 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.413129 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.413133 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.413137 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.413141 | orchestrator | 2026-03-17 01:17:01.413144 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-17 01:17:01.413148 | orchestrator | Tuesday 17 March 2026 01:10:45 +0000 (0:00:00.998) 0:04:02.021 ********* 2026-03-17 01:17:01.413152 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.413156 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.413159 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.413164 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.413167 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.413171 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.413175 | orchestrator | 2026-03-17 01:17:01.413178 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-17 01:17:01.413182 | orchestrator | Tuesday 17 March 2026 01:10:47 +0000 (0:00:01.845) 0:04:03.866 ********* 2026-03-17 01:17:01.413187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2026-03-17 01:17:01.413221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413275 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-17 01:17:01.413291 | orchestrator | 2026-03-17 01:17:01.413295 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:17:01.413299 | orchestrator | Tuesday 17 March 2026 01:10:49 +0000 (0:00:02.159) 0:04:06.026 ********* 2026-03-17 01:17:01.413303 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:01.413308 | orchestrator | 2026-03-17 01:17:01.413311 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-17 01:17:01.413315 | orchestrator | Tuesday 17 March 2026 01:10:50 +0000 (0:00:01.021) 0:04:07.047 ********* 2026-03-17 01:17:01.413329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2026-03-17 01:17:01.413373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.413431 | orchestrator | 2026-03-17 01:17:01.413435 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-17 01:17:01.413439 | orchestrator | Tuesday 17 March 2026 01:10:53 +0000 (0:00:03.229) 0:04:10.277 ********* 2026-03-17 01:17:01.413445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413457 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.413471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413486 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.413492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413515 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.413522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413530 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.413534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413544 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.413548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413555 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.413559 | orchestrator | 2026-03-17 01:17:01.413608 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:17:01.413617 | orchestrator | Tuesday 17 March 2026 01:10:54 +0000 (0:00:01.274) 0:04:11.552 ********* 2026-03-17 01:17:01.413630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413648 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.413654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413684 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.413688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:17:01.413693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:17:01.413700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413705 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.413709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413721 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.413735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413745 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.413749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:17:01.413754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:17:01.413758 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.413763 | orchestrator | 2026-03-17 01:17:01.413767 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:17:01.413773 | orchestrator | Tuesday 17 March 2026 01:10:56 +0000 (0:00:01.835) 0:04:13.388 ********* 2026-03-17 01:17:01.413778 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.413782 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.413786 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.413791 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:17:01.413795 | orchestrator | 2026-03-17 01:17:01.413800 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-17 01:17:01.413804 | orchestrator | Tuesday 17 March 2026 01:10:57 +0000 (0:00:00.999) 0:04:14.387 ********* 2026-03-17 01:17:01.413808 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:17:01.413813 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:17:01.413817 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:17:01.413822 | orchestrator | 2026-03-17 01:17:01.413826 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-17 01:17:01.413833 | orchestrator | Tuesday 17 March 2026 01:10:58 +0000 (0:00:01.036) 0:04:15.424 ********* 
2026-03-17 01:17:01.413838 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:17:01.413842 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:17:01.413847 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:17:01.413851 | orchestrator | 2026-03-17 01:17:01.413855 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-17 01:17:01.413860 | orchestrator | Tuesday 17 March 2026 01:11:00 +0000 (0:00:01.352) 0:04:16.777 ********* 2026-03-17 01:17:01.413864 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:17:01.413868 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:17:01.413873 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:17:01.413877 | orchestrator | 2026-03-17 01:17:01.413881 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-17 01:17:01.413886 | orchestrator | Tuesday 17 March 2026 01:11:00 +0000 (0:00:00.526) 0:04:17.303 ********* 2026-03-17 01:17:01.413890 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:17:01.413895 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:17:01.413899 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:17:01.413903 | orchestrator | 2026-03-17 01:17:01.413907 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-17 01:17:01.413912 | orchestrator | Tuesday 17 March 2026 01:11:01 +0000 (0:00:00.514) 0:04:17.818 ********* 2026-03-17 01:17:01.413926 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:17:01.413932 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:17:01.413938 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:17:01.413946 | orchestrator | 2026-03-17 01:17:01.413955 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-17 01:17:01.413962 | orchestrator | Tuesday 17 March 2026 
01:11:02 +0000 (0:00:01.317) 0:04:19.135 ********* 2026-03-17 01:17:01.413968 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:17:01.413975 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:17:01.413982 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:17:01.413988 | orchestrator | 2026-03-17 01:17:01.413995 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-17 01:17:01.414000 | orchestrator | Tuesday 17 March 2026 01:11:04 +0000 (0:00:01.632) 0:04:20.767 ********* 2026-03-17 01:17:01.414007 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:17:01.414038 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:17:01.414045 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:17:01.414051 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-17 01:17:01.414058 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-17 01:17:01.414064 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-17 01:17:01.414070 | orchestrator | 2026-03-17 01:17:01.414076 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-17 01:17:01.414081 | orchestrator | Tuesday 17 March 2026 01:11:07 +0000 (0:00:03.629) 0:04:24.397 ********* 2026-03-17 01:17:01.414087 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414094 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.414100 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.414107 | orchestrator | 2026-03-17 01:17:01.414113 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-17 01:17:01.414120 | orchestrator | Tuesday 17 March 2026 01:11:08 +0000 (0:00:00.345) 0:04:24.742 ********* 2026-03-17 01:17:01.414126 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414132 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.414139 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.414145 | orchestrator | 2026-03-17 01:17:01.414151 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-17 01:17:01.414162 | orchestrator | Tuesday 17 March 2026 01:11:08 +0000 (0:00:00.314) 0:04:25.056 ********* 2026-03-17 01:17:01.414168 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.414175 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.414181 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.414187 | orchestrator | 2026-03-17 01:17:01.414193 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-17 01:17:01.414197 | orchestrator | Tuesday 17 March 2026 01:11:09 +0000 (0:00:01.403) 0:04:26.460 ********* 2026-03-17 01:17:01.414204 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:17:01.414211 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:17:01.414217 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:17:01.414230 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:17:01.414237 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:17:01.414244 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 
'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:17:01.414250 | orchestrator | 2026-03-17 01:17:01.414257 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-17 01:17:01.414264 | orchestrator | Tuesday 17 March 2026 01:11:12 +0000 (0:00:02.921) 0:04:29.381 ********* 2026-03-17 01:17:01.414270 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:17:01.414276 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:17:01.414282 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:17:01.414285 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:17:01.414289 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.414295 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:17:01.414302 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.414308 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:17:01.414314 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.414319 | orchestrator | 2026-03-17 01:17:01.414326 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-17 01:17:01.414331 | orchestrator | Tuesday 17 March 2026 01:11:15 +0000 (0:00:03.082) 0:04:32.464 ********* 2026-03-17 01:17:01.414337 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.414342 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.414347 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.414353 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:17:01.414359 | orchestrator | 2026-03-17 01:17:01.414366 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-17 01:17:01.414372 | orchestrator | Tuesday 17 March 2026 01:11:17 +0000 (0:00:01.687) 0:04:34.152 ********* 2026-03-17 
01:17:01.414377 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:17:01.414383 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:17:01.414418 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:17:01.414426 | orchestrator | 2026-03-17 01:17:01.414432 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-17 01:17:01.414438 | orchestrator | Tuesday 17 March 2026 01:11:18 +0000 (0:00:00.891) 0:04:35.044 ********* 2026-03-17 01:17:01.414444 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414451 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.414457 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.414463 | orchestrator | 2026-03-17 01:17:01.414475 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-17 01:17:01.414482 | orchestrator | Tuesday 17 March 2026 01:11:18 +0000 (0:00:00.317) 0:04:35.361 ********* 2026-03-17 01:17:01.414488 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414495 | orchestrator | 2026-03-17 01:17:01.414499 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-17 01:17:01.414503 | orchestrator | Tuesday 17 March 2026 01:11:18 +0000 (0:00:00.124) 0:04:35.486 ********* 2026-03-17 01:17:01.414507 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414510 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.414514 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.414518 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.414522 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.414525 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.414529 | orchestrator | 2026-03-17 01:17:01.414533 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-17 01:17:01.414536 | 
orchestrator | Tuesday 17 March 2026 01:11:19 +0000 (0:00:00.736) 0:04:36.223 ********* 2026-03-17 01:17:01.414540 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:17:01.414544 | orchestrator | 2026-03-17 01:17:01.414548 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-17 01:17:01.414551 | orchestrator | Tuesday 17 March 2026 01:11:20 +0000 (0:00:00.725) 0:04:36.948 ********* 2026-03-17 01:17:01.414555 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.414559 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.414565 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.414584 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.414593 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.414598 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.414604 | orchestrator | 2026-03-17 01:17:01.414610 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-17 01:17:01.414615 | orchestrator | Tuesday 17 March 2026 01:11:20 +0000 (0:00:00.551) 0:04:37.499 ********* 2026-03-17 01:17:01.414626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.414753 | orchestrator | 2026-03-17 01:17:01.414757 | orchestrator | 
TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-17 01:17:01.414761 | orchestrator | Tuesday 17 March 2026 01:11:24 +0000 (0:00:03.495) 0:04:40.995 *********
2026-03-17 01:17:01.414767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.414772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.414776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.414782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.414786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.414794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.414801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.414815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.414827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.414832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.414843 | orchestrator |
2026-03-17 01:17:01.414847 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-17 01:17:01.414851 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:06.252) 0:04:47.247 *********
2026-03-17 01:17:01.414855 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.414859 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.414862 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.414866 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.414870 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.414873 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.414877 | orchestrator |
2026-03-17 01:17:01.414881 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-17 01:17:01.414885 | orchestrator | Tuesday 17 March 2026 01:11:32 +0000 (0:00:01.666) 0:04:48.913 *********
2026-03-17 01:17:01.414891 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414896 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414900 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414904 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414908 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414912 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414916 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.414919 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414923 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.414927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-17 01:17:01.414931 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414935 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.414938 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414942 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414946 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-17 01:17:01.414950 | orchestrator |
2026-03-17 01:17:01.414953 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-17 01:17:01.414957 | orchestrator | Tuesday 17 March 2026 01:11:35 +0000 (0:00:03.326) 0:04:52.240 *********
2026-03-17 01:17:01.414961 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.414965 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.414968 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.414972 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.414976 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.414980 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.414983 | orchestrator |
2026-03-17 01:17:01.414987 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-17 01:17:01.414993 | orchestrator | Tuesday 17 March 2026 01:11:36 +0000 (0:00:00.643) 0:04:52.883 *********
2026-03-17 01:17:01.414997 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415001 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415005 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415016 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415024 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415028 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415032 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-17 01:17:01.415035 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415039 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415043 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415047 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415057 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415061 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415065 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415068 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415072 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415076 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415080 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415083 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415087 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:17:01.415091 | orchestrator |
2026-03-17 01:17:01.415095 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-17 01:17:01.415100 | orchestrator | Tuesday 17 March 2026 01:11:41 +0000 (0:00:05.346) 0:04:58.230 *********
2026-03-17 01:17:01.415113 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415127 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415134 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415140 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415146 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415152 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415159 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:17:01.415165 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415170 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415177 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415184 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415190 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415197 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415204 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415209 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415213 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415217 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415224 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:17:01.415228 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415231 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415235 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415243 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415246 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:17:01.415250 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415254 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415258 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:17:01.415261 | orchestrator |
2026-03-17 01:17:01.415265 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-17 01:17:01.415269 | orchestrator | Tuesday 17 March 2026 01:11:48 +0000 (0:00:06.467) 0:05:04.697 *********
2026-03-17 01:17:01.415272 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.415276 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.415280 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.415283 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415287 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415291 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415295 | orchestrator |
2026-03-17 01:17:01.415298 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-17 01:17:01.415302 | orchestrator | Tuesday 17 March 2026 01:11:48 +0000 (0:00:00.533) 0:05:05.231 *********
2026-03-17 01:17:01.415306 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.415309 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.415313 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.415317 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415320 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415324 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415328 | orchestrator |
2026-03-17 01:17:01.415332 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-17 01:17:01.415335 | orchestrator | Tuesday 17 March 2026 01:11:49 +0000 (0:00:00.720) 0:05:05.952 *********
2026-03-17 01:17:01.415339 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415343 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415346 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415350 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:17:01.415354 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:17:01.415358 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:17:01.415361 | orchestrator |
2026-03-17 01:17:01.415365 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-03-17 01:17:01.415370 | orchestrator | Tuesday 17 March 2026 01:11:51 +0000 (0:00:01.888) 0:05:07.840 *********
2026-03-17 01:17:01.415376 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415382 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415388 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415394 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:17:01.415400 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:17:01.415406 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:17:01.415411 | orchestrator |
2026-03-17 01:17:01.415417 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-17 01:17:01.415424 | orchestrator | Tuesday 17 March 2026 01:11:53 +0000 (0:00:02.151) 0:05:09.992 *********
2026-03-17 01:17:01.415434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.415464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.415471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415476 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.415485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415503 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.415514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:17:01.415521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415528 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.415534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.415541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415547 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.415564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415568 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.415592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:17:01.415596 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415600 | orchestrator |
2026-03-17 01:17:01.415604 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-17 01:17:01.415608 | orchestrator | Tuesday 17 March 2026 01:11:54 +0000 (0:00:01.426) 0:05:11.418 *********
2026-03-17 01:17:01.415612 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-17 01:17:01.415616 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415619 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:17:01.415623 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-17 01:17:01.415627 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415631 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:17:01.415634 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-17 01:17:01.415638 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415642 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:17:01.415645 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-17 01:17:01.415649 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415653 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:17:01.415657 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-17 01:17:01.415660 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415664 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:17:01.415668 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-17 01:17:01.415672 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-17 01:17:01.415675 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:17:01.415682 | orchestrator |
2026-03-17 01:17:01.415686 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-17 01:17:01.415689 | orchestrator | Tuesday 17 March 2026 01:11:55 +0000 (0:00:00.865) 0:05:12.284 *********
2026-03-17 01:17:01.415695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:17:01.415712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.415716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:17:01.415723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True,
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:01.415772 | orchestrator | 2026-03-17 01:17:01.415776 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:17:01.415780 | orchestrator | Tuesday 17 March 2026 01:11:58 +0000 (0:00:02.663) 0:05:14.947 ********* 2026-03-17 01:17:01.415786 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.415790 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.415793 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.415797 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.415801 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.415805 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.415808 | orchestrator | 2026-03-17 01:17:01.415812 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415816 | orchestrator | Tuesday 17 March 2026 01:11:58 +0000 (0:00:00.627) 0:05:15.575 ********* 2026-03-17 01:17:01.415820 | orchestrator | 2026-03-17 01:17:01.415823 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415827 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.131) 0:05:15.707 ********* 2026-03-17 01:17:01.415831 | orchestrator | 2026-03-17 01:17:01.415834 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415838 | orchestrator 
| Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.126) 0:05:15.833 ********* 2026-03-17 01:17:01.415844 | orchestrator | 2026-03-17 01:17:01.415848 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415852 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.143) 0:05:15.977 ********* 2026-03-17 01:17:01.415855 | orchestrator | 2026-03-17 01:17:01.415859 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415863 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.126) 0:05:16.104 ********* 2026-03-17 01:17:01.415867 | orchestrator | 2026-03-17 01:17:01.415870 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:17:01.415874 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.225) 0:05:16.329 ********* 2026-03-17 01:17:01.415878 | orchestrator | 2026-03-17 01:17:01.415881 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-17 01:17:01.415885 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.132) 0:05:16.461 ********* 2026-03-17 01:17:01.415889 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.415893 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.415896 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.415900 | orchestrator | 2026-03-17 01:17:01.415904 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-17 01:17:01.415908 | orchestrator | Tuesday 17 March 2026 01:12:11 +0000 (0:00:11.319) 0:05:27.781 ********* 2026-03-17 01:17:01.415911 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.415915 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.415919 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.415923 | orchestrator | 2026-03-17 01:17:01.415926 | 
orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-17 01:17:01.415930 | orchestrator | Tuesday 17 March 2026 01:12:27 +0000 (0:00:16.353) 0:05:44.134 ********* 2026-03-17 01:17:01.415934 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.415938 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.415941 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.415945 | orchestrator | 2026-03-17 01:17:01.415949 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-17 01:17:01.415953 | orchestrator | Tuesday 17 March 2026 01:12:47 +0000 (0:00:20.419) 0:06:04.554 ********* 2026-03-17 01:17:01.415956 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.415960 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.415964 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.415967 | orchestrator | 2026-03-17 01:17:01.415971 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-17 01:17:01.415975 | orchestrator | Tuesday 17 March 2026 01:13:15 +0000 (0:00:27.656) 0:06:32.210 ********* 2026-03-17 01:17:01.415982 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-17 01:17:01.415986 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-17 01:17:01.415989 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-03-17 01:17:01.415993 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.415997 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.416001 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.416004 | orchestrator | 2026-03-17 01:17:01.416008 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-17 01:17:01.416012 | orchestrator | Tuesday 17 March 2026 01:13:21 +0000 (0:00:06.213) 0:06:38.423 ********* 2026-03-17 01:17:01.416016 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.416019 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.416023 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.416027 | orchestrator | 2026-03-17 01:17:01.416030 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-17 01:17:01.416034 | orchestrator | Tuesday 17 March 2026 01:13:22 +0000 (0:00:00.702) 0:06:39.126 ********* 2026-03-17 01:17:01.416038 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:17:01.416044 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:17:01.416047 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:17:01.416051 | orchestrator | 2026-03-17 01:17:01.416055 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-17 01:17:01.416059 | orchestrator | Tuesday 17 March 2026 01:13:51 +0000 (0:00:28.561) 0:07:07.688 ********* 2026-03-17 01:17:01.416062 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.416066 | orchestrator | 2026-03-17 01:17:01.416070 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-17 01:17:01.416073 | orchestrator | Tuesday 17 March 2026 01:13:51 +0000 (0:00:00.302) 0:07:07.990 ********* 2026-03-17 01:17:01.416077 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.416081 | orchestrator | skipping: [testbed-node-3] 
2026-03-17 01:17:01.416084 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416088 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416092 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416098 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-17 01:17:01.416102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:17:01.416106 | orchestrator | 2026-03-17 01:17:01.416109 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-17 01:17:01.416113 | orchestrator | Tuesday 17 March 2026 01:14:11 +0000 (0:00:19.858) 0:07:27.849 ********* 2026-03-17 01:17:01.416117 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.416120 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416124 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.416128 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.416132 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416135 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416139 | orchestrator | 2026-03-17 01:17:01.416143 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-17 01:17:01.416147 | orchestrator | Tuesday 17 March 2026 01:14:19 +0000 (0:00:08.175) 0:07:36.024 ********* 2026-03-17 01:17:01.416150 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.416154 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.416158 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416162 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416165 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416169 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-17 01:17:01.416173 | 
orchestrator | 2026-03-17 01:17:01.416176 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-17 01:17:01.416180 | orchestrator | Tuesday 17 March 2026 01:14:23 +0000 (0:00:04.240) 0:07:40.264 ********* 2026-03-17 01:17:01.416184 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:17:01.416188 | orchestrator | 2026-03-17 01:17:01.416191 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-17 01:17:01.416195 | orchestrator | Tuesday 17 March 2026 01:14:37 +0000 (0:00:13.525) 0:07:53.789 ********* 2026-03-17 01:17:01.416199 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:17:01.416202 | orchestrator | 2026-03-17 01:17:01.416206 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-17 01:17:01.416210 | orchestrator | Tuesday 17 March 2026 01:14:38 +0000 (0:00:01.435) 0:07:55.225 ********* 2026-03-17 01:17:01.416214 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.416217 | orchestrator | 2026-03-17 01:17:01.416221 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-17 01:17:01.416225 | orchestrator | Tuesday 17 March 2026 01:14:40 +0000 (0:00:01.508) 0:07:56.734 ********* 2026-03-17 01:17:01.416228 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:17:01.416233 | orchestrator | 2026-03-17 01:17:01.416243 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-17 01:17:01.416250 | orchestrator | Tuesday 17 March 2026 01:14:52 +0000 (0:00:12.698) 0:08:09.433 ********* 2026-03-17 01:17:01.416256 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:17:01.416262 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:17:01.416268 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:17:01.416274 | 
orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:01.416281 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:01.416287 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:01.416293 | orchestrator | 2026-03-17 01:17:01.416299 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-17 01:17:01.416305 | orchestrator | 2026-03-17 01:17:01.416309 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-17 01:17:01.416313 | orchestrator | Tuesday 17 March 2026 01:14:54 +0000 (0:00:01.813) 0:08:11.246 ********* 2026-03-17 01:17:01.416316 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:01.416322 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:01.416327 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:01.416333 | orchestrator | 2026-03-17 01:17:01.416339 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-17 01:17:01.416346 | orchestrator | 2026-03-17 01:17:01.416352 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-17 01:17:01.416358 | orchestrator | Tuesday 17 March 2026 01:14:55 +0000 (0:00:01.067) 0:08:12.314 ********* 2026-03-17 01:17:01.416365 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416371 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416377 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416384 | orchestrator | 2026-03-17 01:17:01.416391 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-17 01:17:01.416397 | orchestrator | 2026-03-17 01:17:01.416404 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-17 01:17:01.416410 | orchestrator | Tuesday 17 March 2026 01:14:56 +0000 (0:00:00.505) 0:08:12.819 ********* 2026-03-17 01:17:01.416416 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-17 01:17:01.416420 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-17 01:17:01.416423 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416429 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-17 01:17:01.416435 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-17 01:17:01.416442 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416448 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-17 01:17:01.416455 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-17 01:17:01.416461 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416467 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-17 01:17:01.416473 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-17 01:17:01.416480 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416486 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:17:01.416493 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-17 01:17:01.416500 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-17 01:17:01.416507 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416511 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-17 01:17:01.416516 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-17 01:17:01.416523 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416529 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:17:01.416536 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-17 
01:17:01.416546 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-17 01:17:01.416553 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416559 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-17 01:17:01.416566 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-17 01:17:01.416602 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416609 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:17:01.416615 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-17 01:17:01.416622 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-17 01:17:01.416628 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416635 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-17 01:17:01.416641 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-17 01:17:01.416647 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416654 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416660 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416666 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-17 01:17:01.416672 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-17 01:17:01.416678 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-17 01:17:01.416685 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-17 01:17:01.416691 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-17 01:17:01.416697 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-17 01:17:01.416703 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416709 | orchestrator 
| 2026-03-17 01:17:01.416715 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-17 01:17:01.416722 | orchestrator | 2026-03-17 01:17:01.416728 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-17 01:17:01.416735 | orchestrator | Tuesday 17 March 2026 01:14:57 +0000 (0:00:01.267) 0:08:14.087 ********* 2026-03-17 01:17:01.416740 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-17 01:17:01.416744 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-17 01:17:01.416747 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416751 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-17 01:17:01.416755 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-17 01:17:01.416759 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416762 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-17 01:17:01.416766 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-17 01:17:01.416770 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416773 | orchestrator | 2026-03-17 01:17:01.416777 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-17 01:17:01.416781 | orchestrator | 2026-03-17 01:17:01.416787 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-17 01:17:01.416791 | orchestrator | Tuesday 17 March 2026 01:14:58 +0000 (0:00:00.710) 0:08:14.797 ********* 2026-03-17 01:17:01.416795 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416798 | orchestrator | 2026-03-17 01:17:01.416802 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-17 01:17:01.416806 | orchestrator | 2026-03-17 01:17:01.416810 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2026-03-17 01:17:01.416813 | orchestrator | Tuesday 17 March 2026 01:14:58 +0000 (0:00:00.639) 0:08:15.437 ********* 2026-03-17 01:17:01.416817 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:01.416821 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:01.416825 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:01.416828 | orchestrator | 2026-03-17 01:17:01.416835 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:17:01.416839 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:17:01.416843 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-17 01:17:01.416848 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-17 01:17:01.416851 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-17 01:17:01.416855 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-17 01:17:01.416859 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-17 01:17:01.416866 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-17 01:17:01.416869 | orchestrator | 2026-03-17 01:17:01.416873 | orchestrator | 2026-03-17 01:17:01.416877 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:17:01.416881 | orchestrator | Tuesday 17 March 2026 01:14:59 +0000 (0:00:00.558) 0:08:15.996 ********* 2026-03-17 01:17:01.416885 | orchestrator | =============================================================================== 2026-03-17 01:17:01.416888 | orchestrator | nova : 
Running Nova API bootstrap container ---------------------------- 30.05s 2026-03-17 01:17:01.416892 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.56s 2026-03-17 01:17:01.416896 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.66s 2026-03-17 01:17:01.416900 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.01s 2026-03-17 01:17:01.416903 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.42s 2026-03-17 01:17:01.416907 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 19.86s 2026-03-17 01:17:01.416911 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.01s 2026-03-17 01:17:01.416914 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.73s 2026-03-17 01:17:01.416918 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.35s 2026-03-17 01:17:01.416922 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.44s 2026-03-17 01:17:01.416925 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.19s 2026-03-17 01:17:01.416929 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.53s 2026-03-17 01:17:01.416933 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.70s 2026-03-17 01:17:01.416937 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.55s 2026-03-17 01:17:01.416940 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.02s 2026-03-17 01:17:01.416944 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.32s 2026-03-17 01:17:01.416948 | orchestrator | nova : Restart 
nova-api container --------------------------------------- 9.64s 2026-03-17 01:17:01.416952 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.18s 2026-03-17 01:17:01.416955 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.79s 2026-03-17 01:17:01.416959 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.80s 2026-03-17 01:17:01.416963 | orchestrator | 2026-03-17 01:17:01 | INFO  | Task 8016a7ad-1a61-436b-b321-b20f53721e39 is in state SUCCESS 2026-03-17 01:17:01.416969 | orchestrator | 2026-03-17 01:17:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:04.446352 | orchestrator | 2026-03-17 01:17:04 | INFO  | Task a92a785e-92a1-4140-bdb5-3e07a55a14db is in state SUCCESS 2026-03-17 01:17:04.447229 | orchestrator | 2026-03-17 01:17:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-17 01:17:04.448135 | orchestrator | 2026-03-17 01:17:04.448175 | orchestrator | 2026-03-17 01:17:04.448192 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:17:04.448197 | orchestrator | 2026-03-17 01:17:04.448201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:17:04.448206 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:00.275) 0:00:00.275 ********* 2026-03-17 01:17:04.448210 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.448214 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:04.448219 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:04.448222 | orchestrator | 2026-03-17 01:17:04.448226 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:17:04.448230 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:00.242) 0:00:00.517 ********* 2026-03-17 01:17:04.448234 | orchestrator | ok: [testbed-node-0] => 
(item=enable_octavia_True) 2026-03-17 01:17:04.448238 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-17 01:17:04.448242 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-17 01:17:04.448245 | orchestrator | 2026-03-17 01:17:04.448249 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-17 01:17:04.448253 | orchestrator | 2026-03-17 01:17:04.448257 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.448261 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:00.283) 0:00:00.801 ********* 2026-03-17 01:17:04.448265 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:04.448269 | orchestrator | 2026-03-17 01:17:04.448273 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-17 01:17:04.448277 | orchestrator | Tuesday 17 March 2026 01:10:39 +0000 (0:00:00.560) 0:00:01.362 ********* 2026-03-17 01:17:04.448281 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-17 01:17:04.448285 | orchestrator | 2026-03-17 01:17:04.448288 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-17 01:17:04.448292 | orchestrator | Tuesday 17 March 2026 01:10:44 +0000 (0:00:04.752) 0:00:06.115 ********* 2026-03-17 01:17:04.448296 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-17 01:17:04.448300 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-17 01:17:04.448304 | orchestrator | 2026-03-17 01:17:04.448307 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-17 01:17:04.448311 | orchestrator | Tuesday 17 
March 2026 01:10:49 +0000 (0:00:05.555) 0:00:11.671 ********* 2026-03-17 01:17:04.448316 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:17:04.448320 | orchestrator | 2026-03-17 01:17:04.448323 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-17 01:17:04.448327 | orchestrator | Tuesday 17 March 2026 01:10:53 +0000 (0:00:03.220) 0:00:14.892 ********* 2026-03-17 01:17:04.448331 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-17 01:17:04.448335 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-17 01:17:04.448339 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:17:04.448343 | orchestrator | 2026-03-17 01:17:04.448347 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-17 01:17:04.448350 | orchestrator | Tuesday 17 March 2026 01:11:01 +0000 (0:00:07.963) 0:00:22.855 ********* 2026-03-17 01:17:04.448408 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:17:04.448414 | orchestrator | 2026-03-17 01:17:04.448418 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-17 01:17:04.448422 | orchestrator | Tuesday 17 March 2026 01:11:05 +0000 (0:00:04.147) 0:00:27.002 ********* 2026-03-17 01:17:04.448425 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-17 01:17:04.448429 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-17 01:17:04.448433 | orchestrator | 2026-03-17 01:17:04.448437 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-17 01:17:04.448440 | orchestrator | Tuesday 17 March 2026 01:11:11 +0000 (0:00:06.489) 0:00:33.492 ********* 2026-03-17 01:17:04.448444 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-17 
01:17:04.448448 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-17 01:17:04.448451 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-17 01:17:04.448455 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-17 01:17:04.448459 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-17 01:17:04.448463 | orchestrator | 2026-03-17 01:17:04.448466 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.448470 | orchestrator | Tuesday 17 March 2026 01:11:27 +0000 (0:00:16.082) 0:00:49.575 ********* 2026-03-17 01:17:04.448474 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:04.448478 | orchestrator | 2026-03-17 01:17:04.448481 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-17 01:17:04.448485 | orchestrator | Tuesday 17 March 2026 01:11:28 +0000 (0:00:00.702) 0:00:50.277 ********* 2026-03-17 01:17:04.448489 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.448493 | orchestrator | 2026-03-17 01:17:04.448497 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-17 01:17:04.448501 | orchestrator | Tuesday 17 March 2026 01:11:33 +0000 (0:00:05.183) 0:00:55.461 ********* 2026-03-17 01:17:04.448504 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.448508 | orchestrator | 2026-03-17 01:17:04.448512 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-17 01:17:04.448524 | orchestrator | Tuesday 17 March 2026 01:11:38 +0000 (0:00:04.673) 0:01:00.135 ********* 2026-03-17 01:17:04.448528 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.448532 | orchestrator | 2026-03-17 01:17:04.448539 | orchestrator | TASK 
[octavia : Create security groups for octavia] **************************** 2026-03-17 01:17:04.448543 | orchestrator | Tuesday 17 March 2026 01:11:42 +0000 (0:00:03.971) 0:01:04.107 ********* 2026-03-17 01:17:04.448547 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-17 01:17:04.448551 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-17 01:17:04.448555 | orchestrator | 2026-03-17 01:17:04.448558 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-17 01:17:04.448562 | orchestrator | Tuesday 17 March 2026 01:11:51 +0000 (0:00:09.630) 0:01:13.737 ********* 2026-03-17 01:17:04.448989 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-17 01:17:04.449011 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-17 01:17:04.449020 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-17 01:17:04.449029 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-17 01:17:04.449035 | orchestrator | 2026-03-17 01:17:04.449042 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-17 01:17:04.449059 | orchestrator | Tuesday 17 March 2026 01:12:06 +0000 (0:00:14.870) 0:01:28.608 ********* 2026-03-17 01:17:04.449064 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449070 | orchestrator | 2026-03-17 01:17:04.449460 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-17 01:17:04.449484 | orchestrator | Tuesday 17 March 2026 01:12:10 +0000 
(0:00:03.929) 0:01:32.537 ********* 2026-03-17 01:17:04.449491 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449498 | orchestrator | 2026-03-17 01:17:04.449507 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-17 01:17:04.449514 | orchestrator | Tuesday 17 March 2026 01:12:15 +0000 (0:00:04.562) 0:01:37.100 ********* 2026-03-17 01:17:04.449523 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.449529 | orchestrator | 2026-03-17 01:17:04.449535 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-17 01:17:04.449540 | orchestrator | Tuesday 17 March 2026 01:12:15 +0000 (0:00:00.203) 0:01:37.304 ********* 2026-03-17 01:17:04.449546 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.449553 | orchestrator | 2026-03-17 01:17:04.449558 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.449637 | orchestrator | Tuesday 17 March 2026 01:12:20 +0000 (0:00:04.953) 0:01:42.258 ********* 2026-03-17 01:17:04.449645 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:04.449689 | orchestrator | 2026-03-17 01:17:04.449698 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-17 01:17:04.449705 | orchestrator | Tuesday 17 March 2026 01:12:21 +0000 (0:00:00.727) 0:01:42.985 ********* 2026-03-17 01:17:04.449709 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449713 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449717 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449721 | orchestrator | 2026-03-17 01:17:04.449725 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-17 01:17:04.449744 | orchestrator | Tuesday 17 March 2026 01:12:26 +0000 
(0:00:05.213) 0:01:48.198 ********* 2026-03-17 01:17:04.449748 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449752 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449756 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449761 | orchestrator | 2026-03-17 01:17:04.449765 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-17 01:17:04.449769 | orchestrator | Tuesday 17 March 2026 01:12:30 +0000 (0:00:03.662) 0:01:51.861 ********* 2026-03-17 01:17:04.449773 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449777 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449781 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449784 | orchestrator | 2026-03-17 01:17:04.449788 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-17 01:17:04.449792 | orchestrator | Tuesday 17 March 2026 01:12:30 +0000 (0:00:00.809) 0:01:52.670 ********* 2026-03-17 01:17:04.449796 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:04.449800 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:04.449804 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.449807 | orchestrator | 2026-03-17 01:17:04.449811 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-17 01:17:04.449815 | orchestrator | Tuesday 17 March 2026 01:12:32 +0000 (0:00:01.782) 0:01:54.453 ********* 2026-03-17 01:17:04.449818 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449840 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449844 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449848 | orchestrator | 2026-03-17 01:17:04.449852 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-17 01:17:04.449855 | orchestrator | Tuesday 17 March 2026 01:12:33 +0000 (0:00:01.148) 0:01:55.601 
********* 2026-03-17 01:17:04.449859 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449873 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449877 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449881 | orchestrator | 2026-03-17 01:17:04.449884 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-17 01:17:04.449888 | orchestrator | Tuesday 17 March 2026 01:12:34 +0000 (0:00:01.087) 0:01:56.689 ********* 2026-03-17 01:17:04.449892 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449896 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449899 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449903 | orchestrator | 2026-03-17 01:17:04.449932 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-17 01:17:04.449946 | orchestrator | Tuesday 17 March 2026 01:12:36 +0000 (0:00:02.028) 0:01:58.717 ********* 2026-03-17 01:17:04.449950 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.449954 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.449957 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.449961 | orchestrator | 2026-03-17 01:17:04.449965 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-17 01:17:04.449969 | orchestrator | Tuesday 17 March 2026 01:12:38 +0000 (0:00:01.503) 0:02:00.221 ********* 2026-03-17 01:17:04.449972 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.449976 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:04.449980 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:04.449984 | orchestrator | 2026-03-17 01:17:04.449988 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-17 01:17:04.449992 | orchestrator | Tuesday 17 March 2026 01:12:38 +0000 (0:00:00.551) 0:02:00.773 ********* 2026-03-17 
01:17:04.449995 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:04.449999 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:04.450003 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.450007 | orchestrator | 2026-03-17 01:17:04.450042 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.450047 | orchestrator | Tuesday 17 March 2026 01:12:41 +0000 (0:00:02.441) 0:02:03.215 ********* 2026-03-17 01:17:04.450051 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:04.450055 | orchestrator | 2026-03-17 01:17:04.450059 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-17 01:17:04.450063 | orchestrator | Tuesday 17 March 2026 01:12:42 +0000 (0:00:00.657) 0:02:03.872 ********* 2026-03-17 01:17:04.450066 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.450070 | orchestrator | 2026-03-17 01:17:04.450074 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-17 01:17:04.450078 | orchestrator | Tuesday 17 March 2026 01:12:45 +0000 (0:00:03.334) 0:02:07.206 ********* 2026-03-17 01:17:04.450081 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.450085 | orchestrator | 2026-03-17 01:17:04.450089 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-17 01:17:04.450093 | orchestrator | Tuesday 17 March 2026 01:12:48 +0000 (0:00:02.743) 0:02:09.950 ********* 2026-03-17 01:17:04.450097 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-17 01:17:04.450101 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-17 01:17:04.450105 | orchestrator | 2026-03-17 01:17:04.450109 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-17 01:17:04.450112 
| orchestrator | Tuesday 17 March 2026 01:12:55 +0000 (0:00:07.165) 0:02:17.116 ********* 2026-03-17 01:17:04.450130 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.450134 | orchestrator | 2026-03-17 01:17:04.450138 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-17 01:17:04.450142 | orchestrator | Tuesday 17 March 2026 01:12:58 +0000 (0:00:03.278) 0:02:20.394 ********* 2026-03-17 01:17:04.450145 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:17:04.450149 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:17:04.450153 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:17:04.450162 | orchestrator | 2026-03-17 01:17:04.450165 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-17 01:17:04.450169 | orchestrator | Tuesday 17 March 2026 01:12:58 +0000 (0:00:00.392) 0:02:20.786 ********* 2026-03-17 01:17:04.450176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450282 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450291 | orchestrator | 2026-03-17 01:17:04.450296 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-17 01:17:04.450301 | orchestrator | Tuesday 17 March 2026 01:13:01 +0000 (0:00:02.598) 0:02:23.385 ********* 2026-03-17 01:17:04.450305 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.450309 | orchestrator | 2026-03-17 01:17:04.450323 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-17 01:17:04.450331 | orchestrator | Tuesday 17 March 2026 01:13:01 +0000 (0:00:00.151) 0:02:23.537 ********* 2026-03-17 01:17:04.450336 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.450340 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:04.450345 | 
orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:04.450349 | orchestrator | 2026-03-17 01:17:04.450353 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-17 01:17:04.450358 | orchestrator | Tuesday 17 March 2026 01:13:02 +0000 (0:00:00.317) 0:02:23.854 ********* 2026-03-17 01:17:04.450363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450390 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.450409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  
2026-03-17 01:17:04.450431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450441 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:04.450445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450492 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:04.450497 | orchestrator | 2026-03-17 01:17:04.450501 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.450506 | orchestrator | Tuesday 17 March 2026 01:13:02 +0000 (0:00:00.678) 0:02:24.532 ********* 2026-03-17 01:17:04.450510 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:17:04.450515 | orchestrator | 2026-03-17 01:17:04.450520 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-17 01:17:04.450524 | orchestrator | Tuesday 17 March 2026 01:13:03 +0000 (0:00:00.639) 0:02:25.171 ********* 2026-03-17 01:17:04.450529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.450562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.450664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.450735 | orchestrator | 2026-03-17 01:17:04.450745 | orchestrator | TASK [service-cert-copy : octavia | Copying over 
backend internal TLS certificate] *** 2026-03-17 01:17:04.450758 | orchestrator | Tuesday 17 March 2026 01:13:07 +0000 (0:00:04.296) 0:02:29.467 ********* 2026-03-17 01:17:04.450764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450797 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.450813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450850 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:04.450856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450903 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:04.450908 | orchestrator | 2026-03-17 01:17:04.450912 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-17 01:17:04.450916 | orchestrator | Tuesday 17 March 2026 01:13:08 +0000 (0:00:00.733) 0:02:30.201 ********* 2026-03-17 01:17:04.450920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-03-17 01:17:04.450924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.450959 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.450963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.450967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.450972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.450992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 
01:17:04.450996 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:04.451000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:17:04.451005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:17:04.451009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.451013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:17:04.451021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:17:04.451025 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:04.451029 | orchestrator | 2026-03-17 01:17:04.451033 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-17 01:17:04.451037 | orchestrator | Tuesday 17 March 2026 01:13:09 +0000 (0:00:01.048) 0:02:31.249 ********* 2026-03-17 01:17:04.451050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451190 | orchestrator | 2026-03-17 01:17:04.451194 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-17 01:17:04.451198 | orchestrator | Tuesday 17 March 2026 01:13:13 +0000 (0:00:04.535) 0:02:35.785 ********* 2026-03-17 01:17:04.451202 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:17:04.451207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:17:04.451211 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:17:04.451215 | orchestrator | 2026-03-17 01:17:04.451219 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-17 01:17:04.451223 | orchestrator | Tuesday 17 March 2026 01:13:15 +0000 (0:00:01.501) 0:02:37.286 ********* 2026-03-17 01:17:04.451227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451314 | orchestrator | 2026-03-17 01:17:04.451317 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-17 01:17:04.451321 | orchestrator | Tuesday 17 March 2026 01:13:32 +0000 (0:00:17.080) 0:02:54.367 ********* 2026-03-17 01:17:04.451325 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451329 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.451333 | orchestrator | changed: [testbed-node-2] 
2026-03-17 01:17:04.451337 | orchestrator | 2026-03-17 01:17:04.451341 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-17 01:17:04.451345 | orchestrator | Tuesday 17 March 2026 01:13:34 +0000 (0:00:02.113) 0:02:56.480 ********* 2026-03-17 01:17:04.451350 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451354 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451360 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451364 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451368 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451373 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451376 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451380 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451385 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451389 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451392 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451397 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451401 | orchestrator | 2026-03-17 01:17:04.451405 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-17 01:17:04.451409 | orchestrator | Tuesday 17 March 2026 01:13:39 +0000 (0:00:04.671) 0:03:01.152 ********* 2026-03-17 01:17:04.451413 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451416 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 
01:17:04.451420 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451428 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451432 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451436 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451440 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451444 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451449 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451455 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451461 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451470 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451479 | orchestrator | 2026-03-17 01:17:04.451486 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-17 01:17:04.451492 | orchestrator | Tuesday 17 March 2026 01:13:43 +0000 (0:00:04.447) 0:03:05.600 ********* 2026-03-17 01:17:04.451497 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451503 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451509 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:17:04.451515 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451521 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451527 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:17:04.451534 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 
01:17:04.451541 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451548 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:17:04.451552 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451556 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451656 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:17:04.451674 | orchestrator | 2026-03-17 01:17:04.451678 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-17 01:17:04.451682 | orchestrator | Tuesday 17 March 2026 01:13:48 +0000 (0:00:04.526) 0:03:10.126 ********* 2026-03-17 01:17:04.451687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:17:04.451716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:17:04.451729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451739 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451756 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:17:04.451787 | orchestrator | 2026-03-17 01:17:04.451791 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:17:04.451795 | orchestrator | Tuesday 17 March 2026 01:13:51 +0000 (0:00:03.580) 0:03:13.708 ********* 2026-03-17 01:17:04.451798 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:17:04.451802 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:17:04.451807 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:17:04.451810 | orchestrator | 2026-03-17 01:17:04.451814 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-17 01:17:04.451818 | orchestrator | Tuesday 17 March 2026 01:13:52 +0000 (0:00:01.023) 0:03:14.731 ********* 2026-03-17 01:17:04.451822 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451826 | orchestrator | 2026-03-17 01:17:04.451830 | 
orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-17 01:17:04.451834 | orchestrator | Tuesday 17 March 2026 01:13:54 +0000 (0:00:01.974) 0:03:16.705 ********* 2026-03-17 01:17:04.451838 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451842 | orchestrator | 2026-03-17 01:17:04.451845 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-17 01:17:04.451849 | orchestrator | Tuesday 17 March 2026 01:13:56 +0000 (0:00:01.794) 0:03:18.500 ********* 2026-03-17 01:17:04.451853 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451857 | orchestrator | 2026-03-17 01:17:04.451861 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-17 01:17:04.451865 | orchestrator | Tuesday 17 March 2026 01:13:58 +0000 (0:00:01.808) 0:03:20.309 ********* 2026-03-17 01:17:04.451869 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451873 | orchestrator | 2026-03-17 01:17:04.451877 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-17 01:17:04.451881 | orchestrator | Tuesday 17 March 2026 01:14:00 +0000 (0:00:01.850) 0:03:22.159 ********* 2026-03-17 01:17:04.451884 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451888 | orchestrator | 2026-03-17 01:17:04.451892 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:17:04.451896 | orchestrator | Tuesday 17 March 2026 01:14:20 +0000 (0:00:20.394) 0:03:42.554 ********* 2026-03-17 01:17:04.451900 | orchestrator | 2026-03-17 01:17:04.451904 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:17:04.451908 | orchestrator | Tuesday 17 March 2026 01:14:20 +0000 (0:00:00.105) 0:03:42.659 ********* 2026-03-17 01:17:04.451912 | orchestrator | 2026-03-17 
01:17:04.451916 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:17:04.451920 | orchestrator | Tuesday 17 March 2026 01:14:20 +0000 (0:00:00.151) 0:03:42.811 ********* 2026-03-17 01:17:04.451924 | orchestrator | 2026-03-17 01:17:04.451928 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-17 01:17:04.451932 | orchestrator | Tuesday 17 March 2026 01:14:21 +0000 (0:00:00.172) 0:03:42.983 ********* 2026-03-17 01:17:04.451936 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451940 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.451944 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.451947 | orchestrator | 2026-03-17 01:17:04.451951 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-17 01:17:04.451955 | orchestrator | Tuesday 17 March 2026 01:14:31 +0000 (0:00:10.769) 0:03:53.753 ********* 2026-03-17 01:17:04.451959 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451963 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.451966 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.451974 | orchestrator | 2026-03-17 01:17:04.451978 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-17 01:17:04.451982 | orchestrator | Tuesday 17 March 2026 01:14:42 +0000 (0:00:10.823) 0:04:04.576 ********* 2026-03-17 01:17:04.451986 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.451990 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.451994 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.451998 | orchestrator | 2026-03-17 01:17:04.452001 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-17 01:17:04.452006 | orchestrator | Tuesday 17 March 2026 01:14:52 +0000 (0:00:09.841) 0:04:14.417 
********* 2026-03-17 01:17:04.452010 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.452013 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.452018 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.452021 | orchestrator | 2026-03-17 01:17:04.452025 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-17 01:17:04.452029 | orchestrator | Tuesday 17 March 2026 01:15:01 +0000 (0:00:08.444) 0:04:22.862 ********* 2026-03-17 01:17:04.452033 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:17:04.452037 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:17:04.452040 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:17:04.452044 | orchestrator | 2026-03-17 01:17:04.452048 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:17:04.452053 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:17:04.452057 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:17:04.452062 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:17:04.452065 | orchestrator | 2026-03-17 01:17:04.452069 | orchestrator | 2026-03-17 01:17:04.452073 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:17:04.452077 | orchestrator | Tuesday 17 March 2026 01:15:06 +0000 (0:00:05.344) 0:04:28.206 ********* 2026-03-17 01:17:04.452085 | orchestrator | =============================================================================== 2026-03-17 01:17:04.452092 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.39s 2026-03-17 01:17:04.452096 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.08s 2026-03-17 
01:17:04.452100 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.08s 2026-03-17 01:17:04.452104 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.87s 2026-03-17 01:17:04.452107 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.82s 2026-03-17 01:17:04.452111 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.77s 2026-03-17 01:17:04.452115 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.84s 2026-03-17 01:17:04.452119 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.63s 2026-03-17 01:17:04.452123 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.45s 2026-03-17 01:17:04.452127 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.96s 2026-03-17 01:17:04.452130 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.17s 2026-03-17 01:17:04.452134 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.49s 2026-03-17 01:17:04.452138 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.56s 2026-03-17 01:17:04.452142 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.34s 2026-03-17 01:17:04.452146 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.21s 2026-03-17 01:17:04.452150 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.18s 2026-03-17 01:17:04.452157 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 4.95s 2026-03-17 01:17:04.452161 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 4.75s 2026-03-17 01:17:04.452165 
| orchestrator | octavia : Create nova keypair for amphora ------------------------------- 4.67s 2026-03-17 01:17:04.452169 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.67s 2026-03-17 01:17:07.485665 | orchestrator | 2026-03-17 01:17:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-17 01:18:02.203885 | orchestrator | 2026-03-17 01:18:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-17 01:18:05.242758 | orchestrator | 2026-03-17 01:18:05.424767 | orchestrator | 2026-03-17 01:18:05.428114 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 17 01:18:05 UTC 2026 2026-03-17 01:18:05.428161 | orchestrator | 2026-03-17 01:18:05.757136 | orchestrator | ok: Runtime: 0:34:30.042505 2026-03-17 01:18:06.015912 | 2026-03-17 01:18:06.016061 | TASK [Bootstrap services] 2026-03-17 01:18:06.748921 | orchestrator | 2026-03-17 01:18:06.749032 | orchestrator | # BOOTSTRAP 2026-03-17 01:18:06.749050 | orchestrator | 2026-03-17 01:18:06.749057 | orchestrator | + set -e 2026-03-17 01:18:06.749063 | orchestrator | + echo 2026-03-17 01:18:06.749071 | orchestrator | + echo '# BOOTSTRAP' 2026-03-17 01:18:06.749085 | orchestrator | + echo 2026-03-17 01:18:06.749402 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-17 01:18:06.757954 | orchestrator | + set -e 2026-03-17 01:18:06.758042 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-17 01:18:11.075916 | orchestrator | 2026-03-17 01:18:11 | INFO  | It takes a moment until task e3efb8df-2139-4952-a6e5-b8f68cfdea4b (flavor-manager) has been started and output is visible here. 
2026-03-17 01:18:19.328713 | orchestrator | 2026-03-17 01:18:15 | INFO  | Flavor SCS-1L-1 created 2026-03-17 01:18:19.328785 | orchestrator | 2026-03-17 01:18:15 | INFO  | Flavor SCS-1L-1-5 created 2026-03-17 01:18:19.328794 | orchestrator | 2026-03-17 01:18:15 | INFO  | Flavor SCS-1V-2 created 2026-03-17 01:18:19.328798 | orchestrator | 2026-03-17 01:18:15 | INFO  | Flavor SCS-1V-2-5 created 2026-03-17 01:18:19.328801 | orchestrator | 2026-03-17 01:18:15 | INFO  | Flavor SCS-1V-4 created 2026-03-17 01:18:19.328804 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-1V-4-10 created 2026-03-17 01:18:19.328808 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-1V-8 created 2026-03-17 01:18:19.328811 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-1V-8-20 created 2026-03-17 01:18:19.328819 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-4 created 2026-03-17 01:18:19.328823 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-4-10 created 2026-03-17 01:18:19.328826 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-8 created 2026-03-17 01:18:19.328829 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-8-20 created 2026-03-17 01:18:19.328832 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-16 created 2026-03-17 01:18:19.328835 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-2V-16-50 created 2026-03-17 01:18:19.328840 | orchestrator | 2026-03-17 01:18:16 | INFO  | Flavor SCS-4V-8 created 2026-03-17 01:18:19.328856 | orchestrator | 2026-03-17 01:18:17 | INFO  | Flavor SCS-4V-8-20 created 2026-03-17 01:18:19.328862 | orchestrator | 2026-03-17 01:18:17 | INFO  | Flavor SCS-4V-16 created 2026-03-17 01:18:19.328867 | orchestrator | 2026-03-17 01:18:17 | INFO  | Flavor SCS-4V-16-50 created 2026-03-17 01:18:19.328873 | orchestrator | 2026-03-17 01:18:17 | INFO  | Flavor SCS-4V-32 created 2026-03-17 01:18:19.328878 | orchestrator | 2026-03-17 01:18:17 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-17 01:18:19.328884 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-8V-16 created 2026-03-17 01:18:19.328887 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-8V-16-50 created 2026-03-17 01:18:19.328891 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-8V-32 created 2026-03-17 01:18:19.328894 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-8V-32-100 created 2026-03-17 01:18:19.328897 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-16V-32 created 2026-03-17 01:18:19.328900 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-16V-32-100 created 2026-03-17 01:18:19.328903 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-2V-4-20s created 2026-03-17 01:18:19.328907 | orchestrator | 2026-03-17 01:18:18 | INFO  | Flavor SCS-4V-8-50s created 2026-03-17 01:18:19.328910 | orchestrator | 2026-03-17 01:18:19 | INFO  | Flavor SCS-4V-16-100s created 2026-03-17 01:18:19.328913 | orchestrator | 2026-03-17 01:18:19 | INFO  | Flavor SCS-8V-32-100s created 2026-03-17 01:18:20.940365 | orchestrator | 2026-03-17 01:18:20 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-17 01:18:31.156045 | orchestrator | 2026-03-17 01:18:31 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-17 01:18:31.230936 | orchestrator | 2026-03-17 01:18:31 | INFO  | Task 8ba8566d-d0aa-4a2f-b5df-cba7186307b8 (bootstrap-basic) was prepared for execution. 2026-03-17 01:18:31.230983 | orchestrator | 2026-03-17 01:18:31 | INFO  | It takes a moment until task 8ba8566d-d0aa-4a2f-b5df-cba7186307b8 (bootstrap-basic) has been started and output is visible here. 
2026-03-17 01:19:13.843720 | orchestrator | 2026-03-17 01:19:13.843782 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-17 01:19:13.843789 | orchestrator | 2026-03-17 01:19:13.843792 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 01:19:13.843796 | orchestrator | Tuesday 17 March 2026 01:18:34 +0000 (0:00:00.090) 0:00:00.090 ********* 2026-03-17 01:19:13.843799 | orchestrator | ok: [localhost] 2026-03-17 01:19:13.843803 | orchestrator | 2026-03-17 01:19:13.843806 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-17 01:19:13.843809 | orchestrator | Tuesday 17 March 2026 01:18:36 +0000 (0:00:01.943) 0:00:02.034 ********* 2026-03-17 01:19:13.843813 | orchestrator | ok: [localhost] 2026-03-17 01:19:13.843816 | orchestrator | 2026-03-17 01:19:13.843820 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-17 01:19:13.843823 | orchestrator | Tuesday 17 March 2026 01:18:43 +0000 (0:00:07.862) 0:00:09.897 ********* 2026-03-17 01:19:13.843826 | orchestrator | changed: [localhost] 2026-03-17 01:19:13.843830 | orchestrator | 2026-03-17 01:19:13.843833 | orchestrator | TASK [Create public network] *************************************************** 2026-03-17 01:19:13.843836 | orchestrator | Tuesday 17 March 2026 01:18:51 +0000 (0:00:07.647) 0:00:17.544 ********* 2026-03-17 01:19:13.843839 | orchestrator | changed: [localhost] 2026-03-17 01:19:13.843842 | orchestrator | 2026-03-17 01:19:13.843848 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-17 01:19:13.843853 | orchestrator | Tuesday 17 March 2026 01:18:56 +0000 (0:00:04.777) 0:00:22.322 ********* 2026-03-17 01:19:13.843858 | orchestrator | changed: [localhost] 2026-03-17 01:19:13.843862 | orchestrator | 2026-03-17 01:19:13.843867 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-17 01:19:13.843872 | orchestrator | Tuesday 17 March 2026 01:19:02 +0000 (0:00:05.722) 0:00:28.044 ********* 2026-03-17 01:19:13.843877 | orchestrator | changed: [localhost] 2026-03-17 01:19:13.843882 | orchestrator | 2026-03-17 01:19:13.843888 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-17 01:19:13.843893 | orchestrator | Tuesday 17 March 2026 01:19:06 +0000 (0:00:04.159) 0:00:32.203 ********* 2026-03-17 01:19:13.843898 | orchestrator | changed: [localhost] 2026-03-17 01:19:13.843902 | orchestrator | 2026-03-17 01:19:13.843905 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-17 01:19:13.843913 | orchestrator | Tuesday 17 March 2026 01:19:10 +0000 (0:00:03.939) 0:00:36.143 ********* 2026-03-17 01:19:13.843916 | orchestrator | ok: [localhost] 2026-03-17 01:19:13.843919 | orchestrator | 2026-03-17 01:19:13.843922 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:19:13.843926 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:19:13.843930 | orchestrator | 2026-03-17 01:19:13.843933 | orchestrator | 2026-03-17 01:19:13.843936 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:19:13.843939 | orchestrator | Tuesday 17 March 2026 01:19:13 +0000 (0:00:03.539) 0:00:39.682 ********* 2026-03-17 01:19:13.843942 | orchestrator | =============================================================================== 2026-03-17 01:19:13.843945 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.86s 2026-03-17 01:19:13.843958 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.65s 2026-03-17 01:19:13.843962 | 
orchestrator | Set public network to default ------------------------------------------- 5.72s 2026-03-17 01:19:13.843965 | orchestrator | Create public network --------------------------------------------------- 4.78s 2026-03-17 01:19:13.843968 | orchestrator | Create public subnet ---------------------------------------------------- 4.16s 2026-03-17 01:19:13.843971 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.94s 2026-03-17 01:19:13.843974 | orchestrator | Create manager role ----------------------------------------------------- 3.54s 2026-03-17 01:19:13.843977 | orchestrator | Gathering Facts --------------------------------------------------------- 1.94s 2026-03-17 01:19:15.844883 | orchestrator | 2026-03-17 01:19:15 | INFO  | It takes a moment until task 06b2659e-4131-4ab3-a199-703ff5713f53 (image-manager) has been started and output is visible here. 2026-03-17 01:19:18.808693 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 2026-03-17 01:19:18.808744 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 
2026-03-17 01:19:18.808749 | orchestrator | Traceback (most recent call last):
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py", line 131, in create_cli_args
2026-03-17 01:19:18.808749 | orchestrator |     self.main()
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py", line 258, in main
2026-03-17 01:19:18.808749 | orchestrator |     managed_images = self.process_images(images)
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py", line 375, in process_images
2026-03-17 01:19:18.808749 | orchestrator |     existing_images, imported_image, previous_image = self.process_image(
2026-03-17 01:19:18.808749 | orchestrator |         image, versions, sorted_versions, image["meta"].copy()
2026-03-17 01:19:18.808749 | orchestrator |     )
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py", line 548, in process_image
2026-03-17 01:19:18.808749 | orchestrator |     cloud_images = self.get_images()
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py", line 469, in get_images
2026-03-17 01:19:18.808749 | orchestrator |     for image in self.conn.image.images():
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack/service_description.py", line 91, in __get__
2026-03-17 01:19:18.808749 | orchestrator |     proxy = self._make_proxy(instance)
2026-03-17 01:19:18.808749 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack/service_description.py", line 293, in _make_proxy
2026-03-17 01:19:18.808749 | orchestrator |     raise exceptions.NotSupported(
2026-03-17 01:19:18.809081 | orchestrator | NotSupported: The image service for admin: exists but does not have any
2026-03-17 01:19:18.809084 | orchestrator | supported versions.
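The `NotSupported` error above means openstacksdk could not negotiate any Glance API version: discovery against https://api.testbed.osism.xyz:9292 failed, the endpoint itself was used as a fallback, and no usable version document came back. As background, a healthy Glance root document advertises its API versions; the sketch below is illustrative only (the sample document and helper are not the keystoneauth implementation) and shows the shape that discovery depends on:

```python
# Illustrative only: a simplified view of what OpenStack version discovery
# consumes, NOT the actual openstacksdk/keystoneauth implementation.
# The sample document mirrors the shape of "GET /" on a healthy Glance
# endpoint; the ids and statuses here are examples.
SAMPLE_DISCOVERY_DOC = {
    "versions": [
        {"id": "v2.16", "status": "CURRENT"},
        {"id": "v2.9", "status": "SUPPORTED"},
    ]
}


def usable_versions(doc):
    """Return the API version ids a client could negotiate."""
    return [
        v["id"]
        for v in doc.get("versions", [])
        if v.get("status") in ("CURRENT", "SUPPORTED")
    ]


print(usable_versions(SAMPLE_DISCOVERY_DOC))  # ['v2.16', 'v2.9']
# An unreachable or empty document yields no versions, which is the
# "does not have any supported versions" failure seen in this job:
print(usable_versions({}))  # []
```

In this job the discovery request never produced such a document, so the empty case applies and the image-manager task aborts before listing any images.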
2026-03-17 01:19:19.178678 | orchestrator | ERROR 2026-03-17 01:19:19.178951 | orchestrator | { 2026-03-17 01:19:19.178996 | orchestrator | "delta": "0:01:12.659632", 2026-03-17 01:19:19.179023 | orchestrator | "end": "2026-03-17 01:19:19.034398", 2026-03-17 01:19:19.179045 | orchestrator | "msg": "non-zero return code", 2026-03-17 01:19:19.179066 | orchestrator | "rc": 1, 2026-03-17 01:19:19.179085 | orchestrator | "start": "2026-03-17 01:18:06.374766" 2026-03-17 01:19:19.179104 | orchestrator | } failure 2026-03-17 01:19:19.188838 | 2026-03-17 01:19:19.188936 | PLAY RECAP 2026-03-17 01:19:19.188993 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2026-03-17 01:19:19.189029 | 2026-03-17 01:19:19.452836 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-03-17 01:19:19.454255 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-17 01:19:20.243628 | 2026-03-17 01:19:20.243805 | PLAY [Post output play] 2026-03-17 01:19:20.261495 | 2026-03-17 01:19:20.261633 | LOOP [stage-output : Register sources] 2026-03-17 01:19:20.325539 | 2026-03-17 01:19:20.325861 | TASK [stage-output : Check sudo] 2026-03-17 01:19:21.137711 | orchestrator | sudo: a password is required 2026-03-17 01:19:21.367607 | orchestrator | ok: Runtime: 0:00:00.011160 2026-03-17 01:19:21.382419 | 2026-03-17 01:19:21.382611 | LOOP [stage-output : Set source and destination for files and folders] 2026-03-17 01:19:21.420513 | 2026-03-17 01:19:21.420821 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-17 01:19:21.498614 | orchestrator | ok 2026-03-17 01:19:21.507252 | 2026-03-17 01:19:21.507388 | LOOP [stage-output : Ensure target folders exist] 2026-03-17 01:19:21.940551 | orchestrator | ok: "docs" 2026-03-17 01:19:21.940858 | 2026-03-17 01:19:22.172603 | orchestrator | ok: "artifacts" 2026-03-17 01:19:22.398671 | orchestrator | ok: "logs" 2026-03-17 
01:19:22.418793 | 2026-03-17 01:19:22.418980 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-17 01:19:22.453940 | 2026-03-17 01:19:22.454179 | TASK [stage-output : Make all log files readable] 2026-03-17 01:19:22.695031 | orchestrator | ok 2026-03-17 01:19:22.704177 | 2026-03-17 01:19:22.704322 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-17 01:19:22.739783 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:22.755183 | 2026-03-17 01:19:22.755327 | TASK [stage-output : Discover log files for compression] 2026-03-17 01:19:22.779772 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:22.795337 | 2026-03-17 01:19:22.795567 | LOOP [stage-output : Archive everything from logs] 2026-03-17 01:19:22.846697 | 2026-03-17 01:19:22.846913 | PLAY [Post cleanup play] 2026-03-17 01:19:22.855206 | 2026-03-17 01:19:22.855318 | TASK [Set cloud fact (Zuul deployment)] 2026-03-17 01:19:22.920069 | orchestrator | ok 2026-03-17 01:19:22.930918 | 2026-03-17 01:19:22.931036 | TASK [Set cloud fact (local deployment)] 2026-03-17 01:19:22.955513 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:22.966941 | 2026-03-17 01:19:22.967076 | TASK [Clean the cloud environment] 2026-03-17 01:19:25.159055 | orchestrator | 2026-03-17 01:19:25 - clean up servers 2026-03-17 01:19:26.017250 | orchestrator | 2026-03-17 01:19:26 - testbed-manager 2026-03-17 01:19:26.099300 | orchestrator | 2026-03-17 01:19:26 - testbed-node-5 2026-03-17 01:19:26.190563 | orchestrator | 2026-03-17 01:19:26 - testbed-node-3 2026-03-17 01:19:26.274762 | orchestrator | 2026-03-17 01:19:26 - testbed-node-0 2026-03-17 01:19:26.356620 | orchestrator | 2026-03-17 01:19:26 - testbed-node-1 2026-03-17 01:19:26.446716 | orchestrator | 2026-03-17 01:19:26 - testbed-node-2 2026-03-17 01:19:26.532413 | orchestrator | 2026-03-17 01:19:26 - testbed-node-4 2026-03-17 01:19:26.608774 | orchestrator | 2026-03-17 01:19:26 
- clean up keypairs 2026-03-17 01:19:26.624419 | orchestrator | 2026-03-17 01:19:26 - testbed 2026-03-17 01:19:26.645271 | orchestrator | 2026-03-17 01:19:26 - wait for servers to be gone 2026-03-17 01:19:39.683810 | orchestrator | 2026-03-17 01:19:39 - clean up ports 2026-03-17 01:19:39.881871 | orchestrator | 2026-03-17 01:19:39 - 14300924-de11-42b9-9da1-c4e0861c0ac3 2026-03-17 01:19:40.162819 | orchestrator | 2026-03-17 01:19:40 - 163fb103-707a-4306-ae8b-2cd495c3ccae 2026-03-17 01:19:40.437956 | orchestrator | 2026-03-17 01:19:40 - 18f92b4f-094a-4e6c-8098-38ce5e7a3bed 2026-03-17 01:19:40.910922 | orchestrator | 2026-03-17 01:19:40 - b84ac277-0fac-4abd-96e6-f5dca955cd90 2026-03-17 01:19:41.163597 | orchestrator | 2026-03-17 01:19:41 - c5d2a8e4-38d8-4f96-9500-cccd965026dd 2026-03-17 01:19:41.403217 | orchestrator | 2026-03-17 01:19:41 - ce891b01-4e51-4f46-9fb8-2c17a3ed1818 2026-03-17 01:19:41.602657 | orchestrator | 2026-03-17 01:19:41 - dad5b81f-a441-454a-a744-5258f6a01e9f 2026-03-17 01:19:41.808859 | orchestrator | 2026-03-17 01:19:41 - clean up volumes 2026-03-17 01:19:41.924686 | orchestrator | 2026-03-17 01:19:41 - testbed-volume-3-node-base 2026-03-17 01:19:41.961027 | orchestrator | 2026-03-17 01:19:41 - testbed-volume-5-node-base 2026-03-17 01:19:42.001804 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-4-node-base 2026-03-17 01:19:42.049507 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-0-node-base 2026-03-17 01:19:42.086131 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-1-node-base 2026-03-17 01:19:42.123918 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-manager-base 2026-03-17 01:19:42.164670 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-5-node-5 2026-03-17 01:19:42.210321 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-6-node-3 2026-03-17 01:19:42.251548 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-2-node-5 2026-03-17 01:19:42.293006 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-4-node-4 
2026-03-17 01:19:42.330871 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-1-node-4 2026-03-17 01:19:42.369681 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-7-node-4 2026-03-17 01:19:42.414528 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-3-node-3 2026-03-17 01:19:42.458802 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-8-node-5 2026-03-17 01:19:42.496231 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-2-node-base 2026-03-17 01:19:42.540700 | orchestrator | 2026-03-17 01:19:42 - testbed-volume-0-node-3 2026-03-17 01:19:42.583457 | orchestrator | 2026-03-17 01:19:42 - disconnect routers 2026-03-17 01:19:42.712817 | orchestrator | 2026-03-17 01:19:42 - testbed 2026-03-17 01:19:43.741043 | orchestrator | 2026-03-17 01:19:43 - clean up subnets 2026-03-17 01:19:43.792982 | orchestrator | 2026-03-17 01:19:43 - subnet-testbed-management 2026-03-17 01:19:43.975604 | orchestrator | 2026-03-17 01:19:43 - clean up networks 2026-03-17 01:19:44.164221 | orchestrator | 2026-03-17 01:19:44 - net-testbed-management 2026-03-17 01:19:44.473121 | orchestrator | 2026-03-17 01:19:44 - clean up security groups 2026-03-17 01:19:44.514303 | orchestrator | 2026-03-17 01:19:44 - testbed-node 2026-03-17 01:19:44.625539 | orchestrator | 2026-03-17 01:19:44 - testbed-management 2026-03-17 01:19:44.728885 | orchestrator | 2026-03-17 01:19:44 - clean up floating ips 2026-03-17 01:19:44.760895 | orchestrator | 2026-03-17 01:19:44 - 81.163.193.53 2026-03-17 01:19:45.137397 | orchestrator | 2026-03-17 01:19:45 - clean up routers 2026-03-17 01:19:45.238238 | orchestrator | 2026-03-17 01:19:45 - testbed 2026-03-17 01:19:46.526065 | orchestrator | ok: Runtime: 0:00:22.886019 2026-03-17 01:19:46.529989 | 2026-03-17 01:19:46.530140 | PLAY RECAP 2026-03-17 01:19:46.530265 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-17 01:19:46.530336 | 2026-03-17 01:19:46.671716 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/post.yml@main] 2026-03-17 01:19:46.672842 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-17 01:19:47.464273 | 2026-03-17 01:19:47.464458 | PLAY [Cleanup play] 2026-03-17 01:19:47.480650 | 2026-03-17 01:19:47.480788 | TASK [Set cloud fact (Zuul deployment)] 2026-03-17 01:19:47.542090 | orchestrator | ok 2026-03-17 01:19:47.549155 | 2026-03-17 01:19:47.549284 | TASK [Set cloud fact (local deployment)] 2026-03-17 01:19:47.583740 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:47.599847 | 2026-03-17 01:19:47.599994 | TASK [Clean the cloud environment] 2026-03-17 01:19:48.820005 | orchestrator | 2026-03-17 01:19:48 - clean up servers 2026-03-17 01:19:49.305840 | orchestrator | 2026-03-17 01:19:49 - clean up keypairs 2026-03-17 01:19:49.320871 | orchestrator | 2026-03-17 01:19:49 - wait for servers to be gone 2026-03-17 01:19:49.370131 | orchestrator | 2026-03-17 01:19:49 - clean up ports 2026-03-17 01:19:49.454157 | orchestrator | 2026-03-17 01:19:49 - clean up volumes 2026-03-17 01:19:49.526933 | orchestrator | 2026-03-17 01:19:49 - disconnect routers 2026-03-17 01:19:49.557572 | orchestrator | 2026-03-17 01:19:49 - clean up subnets 2026-03-17 01:19:49.581685 | orchestrator | 2026-03-17 01:19:49 - clean up networks 2026-03-17 01:19:49.734982 | orchestrator | 2026-03-17 01:19:49 - clean up security groups 2026-03-17 01:19:49.770356 | orchestrator | 2026-03-17 01:19:49 - clean up floating ips 2026-03-17 01:19:49.794832 | orchestrator | 2026-03-17 01:19:49 - clean up routers 2026-03-17 01:19:50.135815 | orchestrator | ok: Runtime: 0:00:01.465912 2026-03-17 01:19:50.138622 | 2026-03-17 01:19:50.138743 | PLAY RECAP 2026-03-17 01:19:50.138830 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-17 01:19:50.138908 | 2026-03-17 01:19:50.264344 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-17 01:19:50.265398 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:19:51.015900 | 2026-03-17 01:19:51.016070 | PLAY [Base post-fetch] 2026-03-17 01:19:51.032095 | 2026-03-17 01:19:51.032237 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-17 01:19:51.098606 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:51.113181 | 2026-03-17 01:19:51.113404 | TASK [fetch-output : Set log path for single node] 2026-03-17 01:19:51.160700 | orchestrator | ok 2026-03-17 01:19:51.168867 | 2026-03-17 01:19:51.168995 | LOOP [fetch-output : Ensure local output dirs] 2026-03-17 01:19:51.660204 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/logs" 2026-03-17 01:19:51.923944 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/artifacts" 2026-03-17 01:19:52.189701 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e0ee52d8e54949f4a7ff2f5852dacab8/work/docs" 2026-03-17 01:19:52.205656 | 2026-03-17 01:19:52.205854 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-17 01:19:53.166529 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:19:53.167112 | orchestrator | changed: All items complete 2026-03-17 01:19:53.167189 | 2026-03-17 01:19:53.883632 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:19:54.615844 | orchestrator | changed: .d..t...... 
./ 2026-03-17 01:19:54.637175 | 2026-03-17 01:19:54.637315 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-17 01:19:54.668269 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:54.674677 | orchestrator | skipping: Conditional result was False 2026-03-17 01:19:54.693720 | 2026-03-17 01:19:54.693832 | PLAY RECAP 2026-03-17 01:19:54.693905 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-17 01:19:54.693944 | 2026-03-17 01:19:54.822282 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:19:54.823688 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:19:55.562088 | 2026-03-17 01:19:55.562210 | PLAY [Base post] 2026-03-17 01:19:55.575051 | 2026-03-17 01:19:55.575153 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-17 01:19:57.028635 | orchestrator | changed 2026-03-17 01:19:57.037948 | 2026-03-17 01:19:57.038066 | PLAY RECAP 2026-03-17 01:19:57.038133 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-17 01:19:57.038201 | 2026-03-17 01:19:57.162393 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:19:57.165149 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-17 01:19:57.953143 | 2026-03-17 01:19:57.953299 | PLAY [Base post-logs] 2026-03-17 01:19:57.963673 | 2026-03-17 01:19:57.963807 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-17 01:19:58.437740 | localhost | changed 2026-03-17 01:19:58.453257 | 2026-03-17 01:19:58.453450 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-17 01:19:58.491251 | localhost | ok 2026-03-17 01:19:58.499090 | 2026-03-17 01:19:58.499247 | TASK [Set zuul-log-path fact] 2026-03-17 
01:19:58.516874 | localhost | ok 2026-03-17 01:19:58.528458 | 2026-03-17 01:19:58.528589 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-17 01:19:58.554684 | localhost | ok 2026-03-17 01:19:58.562240 | 2026-03-17 01:19:58.562511 | TASK [upload-logs : Create log directories] 2026-03-17 01:19:59.047726 | localhost | changed 2026-03-17 01:19:59.050182 | 2026-03-17 01:19:59.050270 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-17 01:19:59.520746 | localhost -> localhost | ok: Runtime: 0:00:00.007261 2026-03-17 01:19:59.529529 | 2026-03-17 01:19:59.529701 | TASK [upload-logs : Upload logs to log server] 2026-03-17 01:20:00.087659 | localhost | Output suppressed because no_log was given 2026-03-17 01:20:00.091647 | 2026-03-17 01:20:00.091813 | LOOP [upload-logs : Compress console log and json output] 2026-03-17 01:20:00.151625 | localhost | skipping: Conditional result was False 2026-03-17 01:20:00.156407 | localhost | skipping: Conditional result was False 2026-03-17 01:20:00.163281 | 2026-03-17 01:20:00.163545 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-17 01:20:00.207064 | localhost | skipping: Conditional result was False 2026-03-17 01:20:00.207621 | 2026-03-17 01:20:00.210803 | localhost | skipping: Conditional result was False 2026-03-17 01:20:00.224735 | 2026-03-17 01:20:00.224934 | LOOP [upload-logs : Upload console log and json output]